Absolute Free Dating For 50+seniors In Clearfield Pa



Business Data Communications and Networking, Twelfth Edition

Jerry FitzGerald, Jerry FitzGerald & Associates
Alan Dennis, Indiana University
Alexandra Durcikova, University of Oklahoma

To my beautiful wife Kelly. AD

VICE PRESIDENT AND EXECUTIVE PUBLISHER Don Fowley
EXECUTIVE EDITOR Beth Lang Golub
EDITORIAL ASSISTANT Jayne Ziemba
SPONSORING EDITOR Mary O'Sullivan
PROJECT EDITOR Ellen Keohane
MARKETING MANAGER Margaret Barrett
MARKETING ASSISTANT Elisa Wong
SENIOR PRODUCT DESIGNER Lydia Cheng
ASSOCIATE EDITOR Christina Volpe
PHOTO EDITOR James Russiello
SENIOR DESIGNER Maureen Eide
ASSOCIATE PRODUCTION MANAGER Joyce Poh
SENIOR PRODUCTION EDITOR Yee Lyn Song
PRODUCTION SERVICES Sangeetha Parthasarathy/Laserwords
COVER DESIGNER Wendy Lai
COVER CREDIT © Rawpixel / iStockphoto

This book was set in Times Roman by Laserwords Private Limited, Chennai, India and printed and bound by Courier Kendallville. The cover was printed by Courier Kendallville. This book is printed on acid-free paper.

Founded in 1807, John Wiley & Sons, Inc., has been a valued source of knowledge and understanding for more than 200 years, helping people around the world meet their needs and fulfill their aspirations. Our company is built on a foundation of principles that include responsibility to the communities we serve and where we live and work. In 2008, we launched a Corporate Citizenship Initiative, a global effort to address the environmental, social, economic, and ethical challenges we face in our business. Among the issues we are addressing are carbon impact, paper specifications and procurement, ethical conduct within our business and among our vendors, and community and charitable support. For more information, please visit our website: www.wiley.com/go/citizenship.
Copyright © 2015, 2012, 2009, 2007, John Wiley & Sons, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, website www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, phone (201) 748-6011, fax (201) 748-6008, website http://www.wiley.com/go/permissions.

Evaluation copies are provided to qualified academics and professionals for review purposes only, for use in their courses during the next academic year. These copies are licensed and may not be sold or transferred to a third party. Upon completion of the review period, please return the evaluation copy to Wiley. Return instructions and a free-of-charge return mailing label are available at www.wiley.com/go/returnlabel. If you have chosen to adopt this textbook for use in your course, please accept this book as your complimentary desk copy. Outside of the United States, please contact your local sales representative.

Library of Congress Cataloging-in-Publication Data
FitzGerald, Jerry, 1936–
Business data communications and networking / Jerry FitzGerald, Jerry FitzGerald & Associates, Alan Dennis, Indiana University, Alexandra Durcikova, University of Arizona. – Twelfth edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-118-89168-1 (paperback)
1. Data transmission systems. 2. Computer networks. 3. Office practice–Automation. I. Dennis, Alan. II. Durcikova, Alexandra. III. Title.
TK5105.F577 2015
004.6–dc23 2014023087

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

ABOUT THE AUTHORS

Alan Dennis is professor of information systems in the Kelley School of Business at Indiana University and holds the John T. Chambers Chair in Internet Systems. The Chambers Chair was established to honor John Chambers, president and chief executive officer of Cisco Systems, the worldwide leader of networking technologies for the Internet. Prior to joining Indiana University, Alan spent nine years as a professor at the University of Georgia, where he won the Richard B. Russell Award for Excellence in Undergraduate Teaching. He has a bachelor's degree in computer science from Acadia University in Nova Scotia, Canada, and an MBA from Queen's University in Ontario, Canada. His PhD in management of information systems is from the University of Arizona. Prior to entering the Arizona doctoral program, he spent three years on the faculty of the Queen's School of Business.

Alan has extensive experience in the development and application of groupware and Internet technologies and co-founded Courseload, an electronic textbook company whose goal is to improve learning and reduce the cost of textbooks. He has won many awards for theoretical and applied research and has published more than 150 business and research articles, including those in Management Science, MIS Quarterly, Information Systems Research, Academy of Management Journal, Organization Behavior and Human Decision Making, Journal of Applied Psychology, Communications of the ACM, and IEEE Transactions on Systems, Man, and Cybernetics. His first book was Getting Started with Microcomputers, published in 1986. Alan is also an author of two systems analysis and design books published by Wiley. He is the cochair of the Internet Technologies Track of the Hawaii International Conference on System Sciences.
He has served as a consultant to BellSouth, Boeing, IBM, Hughes Missile Systems, the U.S. Department of Defense, and the Australian Army.

Alexandra Durcikova is an Assistant Professor at the Price College of Business, University of Oklahoma. Alexandra has a PhD in management information systems from the University of Pittsburgh. She earned an MSc degree in solid state physics from Comenius University, Bratislava, worked as an experimental physics researcher in the area of superconductivity, and was an instructor of executive MBA students prior to pursuing her PhD. Alexandra's research interests include knowledge management and knowledge management systems, the role of organizational climate in the use of knowledge management systems, knowledge management system characteristics, governance mechanisms in the use of knowledge management systems, and human compliance with security policy and characteristics of successful phishing attempts within the area of network security. Her research appears in Information Systems Research, Journal of Management Information Systems, Information Systems Journal, Journal of Organizational and End User Computing, International Journal of Human-Computer Studies, and Communications of the ACM. Alexandra has been teaching business data communications to both undergraduate and graduate students for several years. In addition, she has been teaching classes on information technology strategy and most recently won the Dean's Award for Undergraduate Teaching Excellence while teaching at the University of Arizona.

Dr. Jerry FitzGerald wrote the early editions of this book in the 1980s. At the time, he was the principal in Jerry FitzGerald & Associates, a firm he started in 1977.
PREFACE

The field of data communications has grown faster and become more important than computer processing itself. Though they go hand in hand, the ability to communicate and connect with other computers and mobile devices is what makes or breaks a business today. Three trends support this notion. First, wireless LANs and Bring-Your-Own-Device (BYOD) policies allow us to stay connected not only with the workplace but also with family and friends. Second, networking is becoming an essential part not only of computers but also of devices we use for other purposes, such as kitchen appliances. This web of things allows you to set the thermostat in your home from your mobile phone, can help you cook dinner, and may eventually allow you to drive to work without ever touching the steering wheel. Lastly, much of life is moving online. At first this started with games, but education, politics, and activism followed swiftly. Therefore, understanding how networks work; how they should be set up to support scalability, mobility, and security; and how to manage them is of utmost importance to any business. This need calls not only for engineers who deeply understand the technical aspects of networks but also for highly social individuals who embrace technology in creative ways to give business a competitive edge. So the call is for you who are reading this book: you are at the right place at the right time!

PURPOSE OF THIS BOOK

Our goal is to combine the fundamental concepts of data communications and networking with practical applications. Although technologies and applications change rapidly, the fundamental concepts evolve much more slowly; they provide the foundation from which new technologies and applications can be understood, evaluated, and compared. This book has two intended audiences. First and foremost, it is a university textbook.
Each chapter introduces, describes, and then summarizes fundamental concepts and applications. Management Focus boxes highlight key issues and describe how networks are actually being used today. Technical Focus boxes highlight key technical issues and provide additional detail. Mini case studies at the end of each chapter provide the opportunity to apply these technical and management concepts. Hands-on exercises help to reinforce the concepts introduced in the chapter. Moreover, the text is accompanied by a detailed Instructor's Manual that provides additional background information, teaching tips, and sources of material for student exercises, assignments, and exams. Finally, our Web page contains supplements to our book.

Second, this book is intended for the professional who works in data communications and networking. The book has many detailed descriptions of the technical aspects of communications, along with illustrations where appropriate. Moreover, managerial, technical, and sales personnel can use this book to gain a better understanding of fundamental concepts and trade-offs not presented in technical books or product summaries.

WHAT'S NEW IN THIS EDITION

The twelfth edition maintains the three main themes of the eleventh edition, namely, (1) how networks work (Chapters 1–5), (2) network technologies (Chapters 6–10), and (3) network security and management (Chapters 11 and 12). In the new edition, we removed older technologies and replaced them with new ones. Accordingly, new hands-on activities and questions have been added at the end of each chapter that guide students in understanding how to select technologies to build a network that supports an organization's business needs.
In addition to this overarching change, the twelfth edition has five major changes from the eleventh edition. First, we revised Chapter 1 to explain the three main themes of the book and to help students better understand why they should care about them. Second, this edition focuses on the design of networks. We introduce a comprehensive framework for network design in Chapter 6 that is supported by an ongoing case study at the ends of Chapters 6–10 that walks students through network design step by step. This modification leads to the third change: Chapters 6–12 are designed in a way that can be used for a "flipped classroom" style of teaching as well as the traditional lecture approach. Students are motivated to learn about LANs and WLANs (Chapter 7), BNs (Chapter 8), WANs (Chapter 9), and the Internet (Chapter 10) because they are designing a network for an organization. Fourth, Chapter 5 has a detailed discussion with three new hands-on activities that describe subnetting for IPv4 and one activity that focuses on IPv6. Finally, Chapter 11, which discusses network security, introduces a new framework for risk assessment that builds on currently accepted industry standards. It walks students through risk assessment in an easily comprehensible way.

LAB EXERCISES
www.wiley.com/college/fitzgerald

This edition includes an online lab manual with many hands-on exercises that can be used in a networking lab. These exercises include configuring servers and other additional practical topics.

ONLINE SUPPLEMENTS FOR INSTRUCTORS
www.wiley.com/college/fitzgerald

Instructor's supplements comprise an Instructor's Manual that includes teaching tips, war stories, and answers to end-of-chapter questions; a Test Bank that includes true-false, multiple choice, short answer, and essay test questions for each chapter; and Lecture Slides in PowerPoint for classroom presentations.
All are available on the instructor's book companion site.

E-BOOK

Wiley E-Text: Powered by VitalSource offers students continuing access to materials for their course. Your students can access content on a mobile device, online from any Internet-connected computer, or by a computer via download. With dynamic features built into this e-text, students can search across content, highlight, and take notes that they can share with teachers and classmates. Readers will also have access to interactive images and embedded podcasts. Visit www.wiley.com/college/fitzgerald for more information.

ACKNOWLEDGMENTS

Our thanks to the many people who helped in preparing this edition. Specifically, we want to thank the staff at John Wiley & Sons for their support, including Ellen Keohane, Mary O'Sullivan, Elizabeth Pearson, and Yee Lyn Song. We also want to thank the reviewers whose comments helped us improve this book:

Hans-Joachim Adler, University of Texas at Dallas
Zenaida Bodwin, Northern Virginia Community College
Thomas Case, Georgia Southern University
Jimmie Cauley II, University of Houston
Rangadhar Dash, University of Texas at Arlington
Bob Gehling, Auburn University, Montgomery
Joseph Hasley, Metropolitan State University of Denver
William G. Heninger, Brigham Young University
Robert Hogan, University of Alabama
Margaret Leary, Northern Virginia Community College
Eleanor T. Loiacono, Worcester Polytechnic Institute
Mohamed Mahgoub, New Jersey Institute of Technology
Brad Mattocks, California Lutheran University
Carlos Oliveira, University of California Irvine
Don Riley, University of Maryland
Joseph H. Schuessler, Tarleton State University
Myron Sheu, California State University, Dominguez Hills
Jean G. Smith, Technical College of the Lowcountry
James Stephenson, Western International University
Manjit Taneja, Northern Virginia Community College
Mehmet Ulema, Manhattan College
Jingguo Wang, University of Texas, Arlington
Cartmell Warrington, SUNY Orange
Qing Yan, Grantham University
Shahid Zaheer, Fairleigh Dickinson University

Alan Dennis
Bloomington, Indiana
www.kelley.indiana.edu/ardennis

Alexandra Durcikova
Norman, Oklahoma

CONTENTS

About the Authors
Preface

PART ONE INTRODUCTION

Chapter 1 Introduction to Data Communications
1.1 Introduction
1.2 Data Communications Networks
    1.2.1 Components of a Network
    1.2.2 Types of Networks
1.3 Network Models
    1.3.1 Open Systems Interconnection Reference Model
    1.3.2 Internet Model
    1.3.3 Message Transmission Using Layers
1.4 Network Standards
    1.4.1 The Importance of Standards
    1.4.2 The Standards-Making Process
    1.4.3 Common Standards
1.5 Future Trends
    1.5.1 Wireless LAN and BYOD
    1.5.2 The Web of Things
    1.5.3 Massively Online
1.6 Implications for Management

PART TWO FUNDAMENTAL CONCEPTS

Chapter 2 Application Layer
2.1 Introduction
2.2 Application Architectures
    2.2.1 Host-Based Architectures
    2.2.2 Client-Based Architectures
    2.2.3 Client-Server Architectures
    2.2.4 Cloud Computing Architectures
    2.2.5 Peer-to-Peer Architectures
    2.2.6 Choosing Architectures
2.3 World Wide Web
    2.3.1 How the Web Works
    2.3.2 Inside an HTTP Request
    2.3.3 Inside an HTTP Response
2.4 Electronic Mail
    2.4.1 How Email Works
    2.4.2 Inside an SMTP Packet
    2.4.3 Attachments in Multipurpose Internet Mail Extension
2.5 Other Applications
    2.5.1 Telnet
    2.5.2 Instant Messaging
    2.5.3 Videoconferencing
2.6 Implications for Management

Chapter 3 Physical Layer
3.1 Introduction
3.2 Circuits
    3.2.1 Circuit Configuration
    3.2.2 Data Flow
    3.2.3 Multiplexing
3.3 Communication Media
    3.3.1 Twisted Pair Cable
    3.3.2 Coaxial Cable
    3.3.3 Fiber-Optic Cable
    3.3.4 Radio
    3.3.5 Microwave
    3.3.6 Satellite
    3.3.7 Media Selection
3.4 Digital Transmission of Digital Data
    3.4.1 Coding
    3.4.2 Transmission Modes
    3.4.3 Digital Transmission
    3.4.4 How Ethernet Transmits Data
3.5 Analog Transmission of Digital Data
    3.5.1 Modulation
    3.5.2 Capacity of a Circuit
    3.5.3 How Modems Transmit Data
3.6 Digital Transmission of Analog Data
    3.6.1 Translating from Analog to Digital
    3.6.2 How Telephones Transmit Voice Data
    3.6.3 How Instant Messenger Transmits Voice Data
    3.6.4 Voice over Internet Protocol (VoIP)
3.7 Implications for Management

Chapter 4 Data Link Layer
4.1 Introduction
4.2 Media Access Control
    4.2.1 Contention
    4.2.2 Controlled Access
    4.2.3 Relative Performance
4.3 Error Control
    4.3.1 Sources of Errors
    4.3.2 Error Prevention
    4.3.3 Error Detection
    4.3.4 Error Correction via Retransmission
    4.3.5 Forward Error Correction
    4.3.6 Error Control in Practice
4.4 Data Link Protocols
    4.4.1 Asynchronous Transmission
    4.4.2 Synchronous Transmission
4.5 Transmission Efficiency
4.6 Implications for Management

Chapter 5 Network and Transport Layers
5.1 Introduction
5.2 Transport and Network Layer Protocols
    5.2.1 Transmission Control Protocol (TCP)
    5.2.2 Internet Protocol (IP)
5.3 Transport Layer Functions
    5.3.1 Linking to the Application Layer
    5.3.2 Segmenting
    5.3.3 Session Management
5.4 Addressing
    5.4.1 Assigning Addresses
    5.4.2 Address Resolution
5.5 Routing
    5.5.1 Types of Routing
    5.5.2 Routing Protocols
    5.5.3 Multicasting
    5.5.4 The Anatomy of a Router
5.6 TCP/IP Example
    5.6.1 Known Addresses, Same Subnet
    5.6.2 Known Addresses, Different Subnet
    5.6.3 Unknown Addresses
    5.6.4 TCP Connections
    5.6.5 TCP/IP and Network Layers
5.7 Implications for Management

PART THREE NETWORK TECHNOLOGIES

Chapter 6 Network Design
6.1 Introduction
    6.1.1 Network Architecture Components
    6.1.2 The Traditional Network Design Process
    6.1.3 The Building-Block Network Design Process
6.2 Needs Analysis
    6.2.1 Network Architecture Component
    6.2.2 Application Systems
    6.2.3 Network Users
    6.2.4 Categorizing Network Needs
    6.2.5 Deliverables
6.3 Technology Design
    6.3.1 Designing Clients and Servers
    6.3.2 Designing Circuits
    6.3.3 Network Design Tools
    6.3.4 Deliverables
6.4 Cost Assessment
    6.4.1 Request for Proposal
    6.4.2 Selling the Proposal to Management
    6.4.3 Deliverables
6.5 Implications for Management

Chapter 7 Wired and Wireless Local Area Networks
7.1 Introduction
7.2 LAN Components
    7.2.1 Network Interface Cards
    7.2.2 Network Circuits
    7.2.3 Network Hubs, Switches, and Access Points
    7.2.4 Network Operating Systems
7.3 Wired Ethernet
    7.3.1 Topology
    7.3.2 Media Access Control
    7.3.3 Types of Ethernet
7.4 Wireless Ethernet
    7.4.1 Topology
    7.4.2 Media Access Control
    7.4.3 Wireless Ethernet Frame Layout
    7.4.4 Types of Wireless Ethernet
    7.4.5 Security
7.5 The Best Practice LAN Design
    7.5.1 Designing User Access with Wired Ethernet
    7.5.2 Designing User Access with Wireless Ethernet
    7.5.3 Designing the Data Center
    7.5.4 Designing the e-Commerce Edge
    7.5.5 Designing the SOHO Environment
7.6 Improving LAN Performance
    7.6.1 Improving Server Performance
    7.6.2 Improving Circuit Capacity
    7.6.3 Reducing Network Demand
7.7 Implications for Management

Chapter 8 Backbone Networks
8.1 Introduction
8.2 Switched Backbones
8.3 Routed Backbones
8.4 Virtual LANs
8.5 The Best Practice Backbone Design
8.6 Improving Backbone Performance
    8.6.1 Improving Device Performance
    8.6.2 Improving Circuit Capacity
    8.6.3 Reducing Network Demand
8.7 Implications for Management

Chapter 9 Wide Area Networks
9.1 Introduction
9.2 Dedicated-Circuit Networks
    9.2.1 Basic Architecture
    9.2.2 T Carrier Services
    9.2.3 SONET Services
9.3 Packet-Switched Networks
    9.3.1 Basic Architecture
    9.3.2 Frame Relay Services
    9.3.3 Ethernet Services
    9.3.4 MPLS Services
    9.3.5 IP Services
9.4 Virtual Private Networks
    9.4.1 Basic Architecture
    9.4.2 VPN Types
    9.4.3 How VPNs Work
9.5 The Best Practice WAN Design
9.6 Improving WAN Performance
    9.6.1 Improving Device Performance
    9.6.2 Improving Circuit Capacity
    9.6.3 Reducing Network Demand
9.7 Implications for Management

Chapter 10 The Internet
10.1 Introduction
10.2 How the Internet Works
    10.2.1 Basic Architecture
    10.2.2 Connecting to an ISP
    10.2.3 The Internet Today
10.3 Internet Access Technologies
    10.3.1 Digital Subscriber Line (DSL)
    10.3.2 Cable Modem
    10.3.3 Fiber to the Home
    10.3.4 WiMax
10.4 The Future of the Internet
    10.4.1 Internet Governance
    10.4.2 Building the Future
10.5 Implications for Management

PART FOUR NETWORK MANAGEMENT

Chapter 11 Network Security
11.1 Introduction
    11.1.1 Why Networks Need Security
    11.1.2 Types of Security Threats
    11.1.3 Network Controls
11.2 Risk Assessment
    11.2.1 Develop Risk Measurement Criteria
    11.2.2 Inventory IT Assets
    11.2.3 Identify Threats
    11.2.4 Document Existing Controls
    11.2.5 Identify Improvements
11.3 Ensuring Business Continuity
    11.3.1 Virus Protection
    11.3.2 Denial of Service Protection
    11.3.3 Theft Protection
    11.3.4 Device Failure Protection
    11.3.5 Disaster Protection
11.4 Intrusion Prevention
    11.4.1 Security Policy
    11.4.2 Perimeter Security and Firewalls
    11.4.3 Server and Client Protection
    11.4.4 Encryption
    11.4.5 User Authentication
    11.4.6 Preventing Social Engineering
    11.4.7 Intrusion Prevention Systems
    11.4.8 Intrusion Recovery
11.5 Best Practice Recommendations
11.6 Implications for Management

Chapter 12 Network Management
12.1 Introduction
12.2 Designing for Network Performance
    12.2.1 Managed Networks
    12.2.2 Managing Network Traffic
    12.2.3 Reducing Network Traffic
12.3 Configuration Management
    12.3.1 Configuring the Network and Client Computers
    12.3.2 Documenting the Configuration
12.4 Performance and Fault Management
    12.4.1 Network Monitoring
    12.4.2 Failure Control Function
    12.4.3 Performance and Failure Statistics
    12.4.4 Improving Performance
12.5 End User Support
    12.5.1 Resolving Problems
    12.5.2 Providing End User Training
12.6 Cost Management
    12.6.1 Sources of Costs
    12.6.2 Reducing Costs
12.7 Implications for Management

Appendices (Online)
Glossary (Online)
Index

PART ONE INTRODUCTION

CHAPTER 1 INTRODUCTION TO DATA COMMUNICATIONS

This chapter introduces the basic concepts of data communications. It describes why it is important to study data communications and introduces you to the three fundamental questions that this book answers. Next, it discusses the basic types and components of a data communications network. Also, it examines the importance of a network model based on layers.
Finally, it describes the three key trends in the future of networking.

OBJECTIVES

◾ Be aware of the three fundamental questions this book answers
◾ Be aware of the applications of data communications networks
◾ Be familiar with the major components of and types of networks
◾ Understand the role of network layers
◾ Be familiar with the role of network standards
◾ Be aware of three key trends in communications and networking

OUTLINE

1.1 Introduction
1.2 Data Communications Networks
1.3 Network Models
1.4 Network Standards
1.5 Future Trends
1.6 Implications for Management
Summary

1.1 INTRODUCTION

What Internet connection should you use? Cable modem or DSL (formally called Digital Subscriber Line)? Cable modems are supposedly faster than DSL, providing data speeds of 50 Mbps to DSL's 1.5–25 Mbps (million bits per second). One cable company used a tortoise to represent DSL in advertisements. So which is faster? We'll give you a hint: which won the race in the fable, the tortoise or the hare? By the time you finish this book, you'll understand which is faster and why, as well as why choosing the right company as your Internet service provider (ISP) is probably more important than choosing the right technology.

Over the past decade or so, it has become clear that the world has changed forever. We continue to forge our way through the Information Age, the second Industrial Revolution according to John Chambers, CEO (chief executive officer) of Cisco Systems, Inc., one of the world's leading networking technology companies. The first Industrial Revolution revolutionized the way people worked by introducing machines and new organizational forms. New companies and industries emerged, and old ones died off. The second Industrial Revolution is revolutionizing the way people work through networking and data communications.

The value of a high-speed data communications network is that it brings people together in a way never before possible. In the 1800s, it took several weeks for a message to reach North America by ship from England. By the 1900s, it could be transmitted within the hour. Today, it can be transmitted in seconds. Collapsing the information lag to Internet speeds means that people can communicate and access information anywhere in the world regardless of their physical location. In fact, today's problem is that we cannot handle the quantities of information we receive.

Data communications and networking is a truly global area of study, both because the technology enables global communication and because new technologies and applications often emerge from a variety of countries and spread rapidly around the world. The World Wide Web, for example, was born in a Swiss research lab, was nurtured through its first years primarily by European universities, and exploded into mainstream popular culture because of a development at an American research lab.

One of the problems in studying a global phenomenon lies in explaining the different political and regulatory issues that have evolved and currently exist in different parts of the world. Rather than attempt to explain the different paths taken by different countries, we have chosen simplicity instead. Historically, the majority of readers of previous editions of this book have come from North America. Therefore, although we retain a global focus on technology and its business implications, we focus mostly on North America.
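The cable-versus-DSL speed figures above translate directly into download times, which is a useful way to feel the difference. A minimal arithmetic sketch in Python; the 700 MB file size is an invented example, not from the text:

```python
def transfer_time_seconds(size_mb: float, speed_mbps: float) -> float:
    """Seconds to move a file of size_mb megabytes over a speed_mbps link."""
    return (size_mb * 8) / speed_mbps  # 1 byte = 8 bits

# A 700 MB download at the speeds quoted in the text:
cable = transfer_time_seconds(700, 50.0)     # cable modem, ~50 Mbps
dsl_slow = transfer_time_seconds(700, 1.5)   # slowest DSL tier
dsl_fast = transfer_time_seconds(700, 25.0)  # fastest DSL tier
print(f"cable: {cable:.0f} s, DSL: {dsl_fast:.0f}-{dsl_slow:.0f} s")
```

Real transfers come in slower than these idealized figures because of protocol overhead and shared capacity, which is part of why the tortoise-and-hare question is not settled by raw line speed alone.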
This book answers three fundamental questions.

First, how does the Internet work? When you access a Web site using your computer, laptop, iPad, or smart phone, what happens so that the page opens in your Web browser? This is the focus of Chapters 1–5. The short answer is that the software on your computer (or any device) creates a message composed in different software languages (HTTP, TCP/IP, and Ethernet are common) that requests the page you clicked. This message is then broken up into a series of smaller parts that we call packets. Each packet is transmitted to the nearest router, which is a special-purpose computer whose primary job is to find the best route for these packets to their final destination. The packets move from router to router over the Internet until they reach the Web server, which puts the packets back together into the same message that your computer created. The Web server reads your request and then sends the page back to you in the same way: by composing a message using HTTP, TCP/IP, and Ethernet and then sending it as a series of smaller packets back through the Internet that the software on your computer puts together into the page you requested.

You might have heard a news story that the U.S. or Chinese government can read your email or see what Web sites you're visiting. A more shocking truth is that the person sitting next to you at a coffee shop might be doing exactly the same thing: reading all the packets that come from or go to your laptop. How is this possible, you ask? After finishing Chapter 5, you will know exactly how this is possible.

Second, how do I design a network? This is the focus of Chapters 6–10. We often think about networks in four layers. The first layer is the Local Area Network, or the LAN (either wired or wireless), which enables users like you and me to access the network. The second is the backbone network that connects the different LANs within a building.
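The packetization-and-reassembly story above can be sketched in a few lines of Python. This is a toy illustration only, not a real protocol stack: the headers here are plain string tags, and the byte-offset labels stand in for the sequence numbers TCP actually uses:

```python
LAYERS = ["HTTP", "TCP", "IP", "Ethernet"]  # application layer down to data link

def encapsulate(message: str) -> str:
    """Each layer wraps the data handed down from the layer above with its own header."""
    for layer in LAYERS:
        message = f"[{layer}]" + message
    return message

def packetize(frame: str, size: int = 12) -> list[str]:
    """Break the frame into fixed-size pieces, tagging each with its byte offset."""
    return [f"{i}|{frame[i:i + size]}" for i in range(0, len(frame), size)]

def reassemble(packets: list[str]) -> str:
    """Put the pieces back in order, as the Web server does when packets arrive."""
    ordered = sorted(packets, key=lambda p: int(p.split("|", 1)[0]))
    return "".join(p.split("|", 1)[1] for p in ordered)

request = encapsulate("GET /index.html")
packets = packetize(request)
# Packets may arrive out of order; the receiver still rebuilds the original message.
assert reassemble(list(reversed(packets))) == request
```

The design point the sketch makes is that no single packet carries the whole message: order and completeness are restored at the destination, which is why routers in the middle can handle each packet independently.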
The third is the core network that connects different buildings on a company’s campus. The final layer is the connections we have to the other campuses within the organization and to the Internet. Each of these layers has slightly different concerns, so the way we design networks for them and the technologies we use are slightly different. Although this describes the standard way of building corporate networks, you will also gain a much better understanding of how your wireless router at home works. Perhaps more importantly, you’ll learn why buying the newest and fastest wireless router for your house or apartment is probably not a good way to spend your money.

Finally, how do I manage my network to make sure it is secure, provides good performance, and doesn’t cost too much? This is the focus of Chapters 11 and 12. Would it surprise you to learn that most companies spend between $1,500 and $3,500 per computer per year on network management and security? Yup, we spend way more on network management and security each year than we spend to buy the computer in the first place. And that’s for well-run networks; poorly run networks cost a lot more.

Many people think network security is a technical problem, and to some extent, it is. However, the things people do and don’t do cause more security risks than not having the latest technology. According to Symantec, one of the leading companies that sells antivirus software, about half of all security threats are not prevented by its software. These threats are called targeted attacks, such as phishing attacks (emails that look real but instead take you to fake Web sites) or ransomware (software apps that appear to be useful but actually lock your computer and demand a payment to unlock it). Therefore, network management is as much a people management issue as it is a technology management issue.
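The packet idea from the first question can be sketched with a toy example. This is not a real protocol implementation; it only illustrates that a message (here, a hypothetical HTTP request) is broken into numbered packets that may arrive out of order and are reassembled into the original message at the destination:

```python
import random

# A hypothetical HTTP request message (the host name is illustrative only).
HTTP_REQUEST = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "\r\n"
)

def to_packets(message: str, size: int = 16):
    """Break a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Put the packets back together, regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets(HTTP_REQUEST)
random.shuffle(packets)  # packets may take different routes and arrive out of order
assert reassemble(packets) == HTTP_REQUEST
```

The sequence numbers play the role that real transport protocols play in practice: they let the receiver restore the original order no matter how the network delivered the packets.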
By the time you finish this book, you’ll understand how networks work, how to design networks, and how to manage networks. You won’t be an expert, but you’ll be ready to enter an organization or move on to more advanced courses.

MANAGEMENT FOCUS 1-1: Career Opportunities

It’s a great time to be in information technology (IT)! The technology-fueled new economy has dramatically increased the demand for skilled IT professionals. According to the U.S. Bureau of Labor Statistics, the second fastest growing occupation is data communications and networking analyst, which is expected to grow by 53% by 2018 and create 150,000 new jobs with an annual median salary of $71,100—not counting bonuses.

There are two reasons for this growth. First, companies have to continuously upgrade their networks and thus need skilled employees to support their expanding IT infrastructure. Second, people are spending more time on their mobile devices, and because employers are allowing them to use these personal devices at work (i.e., BYOD, or bring your own device), the network infrastructure has to support the data that flow from these devices as well as make sure that they don’t pose a security risk.

With a few years of experience, there is the possibility to work as an information systems manager, for which the median annual pay is as high as $117,780. An information systems manager plans, coordinates, and directs IT-related activities in such a way that they can fully support the goals of the business. Thus, this job requires a good understanding not only of the business but also of the technology so that appropriate and reliable technology can be implemented at a reasonable cost to keep everything operating smoothly and to guard against cybercriminals. Because of the expanding job market for IT and networking-related jobs, certifications have become important.
Most large vendors of network technologies, such as the Microsoft Corporation and Cisco Systems Inc., provide certification processes (usually a series of courses and formal exams) so that individuals can document their knowledge. Certified network professionals often earn $10,000 to $15,000 more than similarly skilled uncertified professionals—provided that they continue to learn and maintain their certification as new technologies emerge.

Adapted from: http://jobs.aol.com, “In Demand Careers That Pay $100,000 a Year or More”; www.careerpath.com, “Today’s 20 Fastest-Growing Occupations”; www.cnn.com, “30 Jobs Needing Most Workers in Next Decade.”

1.2 DATA COMMUNICATIONS NETWORKS

Data communications is the movement of computer information from one point to another by means of electrical or optical transmission systems. Such systems are often called data communications networks. This is in contrast to the broader term telecommunications, which includes the transmission of voice and video (images and graphics) as well as data and usually implies longer distances. In general, data communications networks collect data from personal computers and other devices and transmit those data to a central server that is a more powerful personal computer, minicomputer, or mainframe, or they perform the reverse process, or some combination of the two. Data communications networks facilitate more efficient use of computers and improve the day-to-day control of a business by providing faster information flow. They also provide message transfer services to allow computer users to talk to one another via email, chat, and video streaming.

TECHNICAL FOCUS 1-1: Internet Domain Names

Internet address names are strictly controlled; otherwise, someone could add a computer to the Internet that had the same address as another computer.
Each address name has two parts, the computer name and its domain. The general format of an Internet address is therefore computer.domain. Some computer names have several parts separated by periods, so some addresses have the format computer.computer.computer.domain. For example, the main university Web server at Indiana University (IU) is called www.indiana.edu, whereas the Web server for the Kelley School of Business at IU is www.kelley.indiana.edu.

Since the Internet began in the United States, the American address board was the first to assign domain names to indicate types of organizations. Some common U.S. domain names are:

EDU     for an educational institution, usually a university
COM     for a commercial business
GOV     for a government department or agency
MIL     for a military unit
ORG     for a nonprofit organization

As networks in other countries were connected to the Internet, they were assigned their own domain names. Some international domain names are:

CA      for Canada
AU      for Australia
UK      for the United Kingdom
DE      for Germany

New top-level domains that focus on specific types of businesses continue to be introduced, such as:

AERO    for aerospace companies
MUSEUM  for museums
NAME    for individuals
PRO     for professionals, such as accountants and lawyers
BIZ     for businesses

Many international domains structure their addresses in much the same way as the United States does. For example, Australia uses EDU to indicate academic institutions, so an address such as xyz.edu.au would indicate an Australian university. For a full list of domain names, see www.iana.org/root/db.

1.2.1 Components of a Network

There are three basic hardware components for a data communications network: a server (e.g., personal computer, mainframe), a client (e.g., personal computer, terminal), and a circuit (e.g., cable, modem) over which messages flow. Both the server and client also need special-purpose network software that enables them to communicate.
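Before moving on, the computer.domain format from the Technical Focus box above can be sketched in a few lines of Python. This is a simplified illustration only (real domain parsing has more nuance, and the host names are the examples from the box):

```python
def split_address(address: str):
    """Split an Internet address into its computer-name parts and its
    top-level domain (the last label), per the computer.domain format."""
    *computers, top_level = address.split(".")
    return computers, top_level

# The Kelley School Web server from the example above:
computers, tld = split_address("www.kelley.indiana.edu")
assert computers == ["www", "kelley", "indiana"]
assert tld == "edu"  # EDU: an educational institution

# An Australian university address ends in the country domain AU:
assert split_address("xyz.edu.au")[1] == "au"
```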
The server stores data or software that can be accessed by the clients. In client-server computing, several servers may work together over the network with a client computer to support the business application.

The client is the input-output hardware device at the user’s end of a communication circuit. It typically provides users with access to the network and the data and software on the server.

The circuit is the pathway through which the messages travel. It is typically a copper wire, although fiber-optic cable and wireless transmission are becoming common. There are many devices in the circuit that perform special functions, such as switches and routers.

Strictly speaking, a network does not need a server. Some networks are designed to connect a set of similar computers that share their data and software with each other. Such networks are called peer-to-peer networks because the computers function as equals, rather than relying on a central server to store the needed data and software.

Figure 1-1 shows a small network that has four personal computers (clients) connected by a switch and cables (circuit). In this network, messages move through the switch to and from the computers. All computers share the same circuit and must take turns sending messages. The router is a special device that connects two or more networks. The router enables computers on this network to communicate with computers on other networks (e.g., the Internet).

The network in Figure 1-1 has three servers. Although one server can perform many functions, networks are often designed so that a separate computer is used to provide different services. The file server stores data and software that can be used by computers on the network. The print server, which is connected to a printer, manages all printing requests from the clients on the network.
The Web server stores documents and graphics that can be accessed from any Web browser, such as Internet Explorer. The Web server can respond to requests from computers on this network or any computer on the Internet. Servers are usually personal computers (often more powerful than the other personal computers on the network) but may be minicomputers or mainframes.

FIGURE 1-1 Example of a local area network (LAN): client computers connected by a switch and cables to a file server, a print server (with printer), a Web server, and a router leading to other networks (e.g., the Internet)

1.2.2 Types of Networks

There are many different ways to categorize networks. One of the most common ways is to look at the geographic scope of the network. Figure 1-2 illustrates four types of networks: local area networks (LANs), backbone networks (BNs), metropolitan area networks (MANs), and wide area networks (WANs). The distinctions among these are becoming blurry because some network technologies now used in LANs were originally developed for WANs, and vice versa. Any rigid classification of technologies is certain to have exceptions.

A local area network (LAN) is a group of computers located in the same general area. A LAN covers a clearly defined small area, such as one floor or work area, a single building, or a group of buildings. The upper-left diagram in Figure 1-2 shows a small LAN located in the records building at the former McClellan Air Force Base in Sacramento. LANs support high-speed data transmission compared with standard telephone circuits, commonly operating at 100 million bits per second (100 Mbps). LANs and wireless LANs are discussed in detail in Chapter 6.

Most LANs are connected to a backbone network (BN), a larger, central network connecting several LANs, other BNs, MANs, and WANs. BNs typically span from hundreds of feet to several miles and provide very high-speed data transmission, commonly 100 to 1,000 Mbps. The second diagram in Figure 1-2 shows a BN that connects the LANs located in several buildings at McClellan Air Force Base. BNs are discussed in detail in Chapter 7.

Wide area networks (WANs) connect BNs and MANs (see Figure 1-2). Most organizations do not build their own WANs by laying cable, building microwave towers, or sending up satellites (unless they have unusually heavy data transmission needs or highly specialized requirements, such as those of the Department of Defense). Instead, most organizations lease circuits from IXCs (e.g., AT&T, Sprint) and use those to transmit their data. WAN circuits provided by IXCs come in all types and sizes but typically span hundreds or thousands of miles and provide data transmission rates from 64 Kbps to 10 Gbps. WANs are discussed in detail in Chapter 8.

FIGURE 1-2 The hierarchical relationship of a local area network (LAN) to a backbone network (BN) to a wide area network (WAN): the LAN at the Records Building is one node of the McClellan Air Force Base BN; the base BN is one node of the Sacramento metropolitan area network (MAN); and the WAN shows Sacramento connected to nine other cities throughout the United States

Two other common terms are intranets and extranets. An intranet is a LAN that uses the same technologies as the Internet (e.g., Web servers, Java, HTML [Hypertext Markup Language]) but is open to only those inside the organization.
For example, although some pages on a Web server may be open to the public and accessible by anyone on the Internet, some pages may be on an intranet and therefore hidden from those who connect to the Web server from the Internet at large. Sometimes an intranet is provided by a completely separate Web server hidden from the Internet. The intranet for the Information Systems Department at Indiana University, for example, provides information on faculty expense budgets, class scheduling for future semesters (e.g., room, instructor), and discussion forums.

An extranet is similar to an intranet in that it, too, uses the same technologies as the Internet but instead is provided to invited users outside the organization who access it over the Internet. It can provide access to information services, inventories, and other internal organizational databases that are provided only to customers, suppliers, or those who have paid for access. Typically, users are given passwords to gain access, but more sophisticated technologies such as smart cards or special software may also be required. Many universities provide extranets for Web-based courses so that only those students enrolled in the course can access course materials and discussions.

1.3 NETWORK MODELS

There are many ways to describe and analyze data communications networks. All networks provide the same basic functions to transfer a message from sender to receiver, but each network can use different network hardware and software to provide these functions. All of these hardware and software products have to work together to successfully transfer a message. One way to accomplish this is to break the entire set of communications functions into a series of layers, each of which can be defined separately. In this way, vendors can develop software and hardware to provide the functions of each layer separately.
The software or hardware can work in any manner and can be easily updated and improved, as long as the interface between that layer and the ones around it remains unchanged. Each piece of hardware and software can then work together in the overall network. There are many different ways in which the network layers can be designed. The two most important network models are the Open Systems Interconnection Reference (OSI) model and the Internet model. The Internet model is the more commonly used of the two; few people use the OSI model, although understanding it is commonly required for network certification exams.

1.3.1 Open Systems Interconnection Reference Model

The Open Systems Interconnection Reference model (usually called the OSI model for short) helped change the face of network computing. Before the OSI model, most commercial networks used by businesses were built using nonstandardized technologies developed by one vendor (remember that the Internet was in use at the time but was not widespread and certainly was not commercial). During the late 1970s, the International Organization for Standardization (ISO) created the Open System Interconnection Subcommittee, whose task was to develop a framework of standards for computer-to-computer communications. In 1984, this effort produced the OSI model.

The OSI model is the most talked about and most referred to network model. If you choose a career in networking, questions about the OSI model will be on the network certification exams offered by Microsoft, Cisco, and other vendors of network hardware and software. However, you will probably never use a network based on the OSI model. Simply put, the OSI model never caught on commercially in North America, although some European networks use it, and some network components developed for use in the United States arguably use parts of it.
Most networks today use the Internet model, which is discussed in the next section. However, because there are many similarities between the OSI model and the Internet model, and because most people in networking are expected to know the OSI model, we discuss it here. The OSI model has seven layers (see Figure 1-3).

Layer 1: Physical Layer
The physical layer is concerned primarily with transmitting data bits (zeros or ones) over a communication circuit. This layer defines the rules by which ones and zeros are transmitted, such as voltages of electricity, number of bits sent per second, and the physical format of the cables and connectors used.

Layer 2: Data Link Layer
The data link layer manages the physical transmission circuit in layer 1 and transforms it into a circuit that is free of transmission errors as far as layers above are concerned. Because layer 1 accepts and transmits only a raw stream of bits without understanding their meaning or structure, the data link layer must create and recognize message boundaries; that is, it must mark where a message starts and where it ends. Another major task of layer 2 is to solve the problems caused by damaged, lost, or duplicate messages so the succeeding layers are shielded from transmission errors. Thus, layer 2 performs error detection and correction. It also decides when a device can transmit so that two computers do not try to transmit at the same time.

FIGURE 1-3 Network models. OSI = Open Systems Interconnection Reference

OSI Model               Internet Model         Groups of Layers    Examples
7. Application Layer    5. Application Layer   Application Layer   Internet Explorer and Web pages
6. Presentation Layer
5. Session Layer
4. Transport Layer      4. Transport Layer     Internetwork Layer  TCP/IP software
3. Network Layer        3. Network Layer
2. Data Link Layer      2. Data Link Layer     Hardware Layer      Ethernet port, Ethernet cables, and Ethernet software drivers
1. Physical Layer       1. Physical Layer
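The error detection performed at layer 2 can be illustrated with a toy checksum. Real data link protocols use much stronger codes (Ethernet, for instance, uses a CRC); this single-byte sum is only a sketch of the principle that the sender appends redundant information the receiver can recompute and compare:

```python
def checksum(payload: bytes) -> int:
    """Toy one-byte checksum: the sum of all bytes, modulo 256."""
    return sum(payload) % 256

def send(payload: bytes) -> bytes:
    """Frame the payload by appending its checksum byte."""
    return payload + bytes([checksum(payload)])

def receive(frame: bytes) -> bytes:
    """Recompute the checksum; a mismatch means a transmission error."""
    payload, received = frame[:-1], frame[-1]
    if checksum(payload) != received:
        raise ValueError("transmission error detected; request retransmission")
    return payload

frame = send(b"hello")
assert receive(frame) == b"hello"

corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]  # flip bits in the first byte
try:
    receive(corrupted)
except ValueError:
    pass  # the error is detected, as the data link layer would do
```

A real data link layer would then request retransmission (or, with an error-correcting code, repair the damage itself), which is exactly the "shielding" of higher layers described above.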
Layer 3: Network Layer
The network layer performs routing. It determines the next computer to which the message should be sent so it can follow the best route through the network and finds the full address for that computer if needed.

Layer 4: Transport Layer
The transport layer deals with end-to-end issues, such as procedures for entering and departing from the network. It establishes, maintains, and terminates logical connections for the transfer of data between the original sender and the final destination of the message. It is responsible for breaking a large data transmission into smaller packets (if needed), ensuring that all the packets have been received, eliminating duplicate packets, and performing flow control to ensure that no computer is overwhelmed by the number of messages it receives. Although error control is performed by the data link layer, the transport layer can also perform error checking.

Layer 5: Session Layer
The session layer is responsible for managing and structuring all sessions. Session initiation must arrange for all the desired and required services between session participants, such as logging on to circuit equipment, transferring files, and performing security checks. Session termination provides an orderly way to end the session, as well as a means to abort a session prematurely. It may have some redundancy built in to recover from a broken transport (layer 4) connection in case of failure. The session layer also handles session accounting so the correct party receives the bill.

Layer 6: Presentation Layer
The presentation layer formats the data for presentation to the user. Its job is to accommodate different interfaces on different computers so the application program need not worry about them. It is concerned with displaying, formatting, and editing user inputs and outputs. For example, layer 6 might perform data compression, translation between different data formats, and screen formatting.
Any function (except those in layers 1 through 5) that is requested sufficiently often to warrant finding a general solution is placed in the presentation layer, although some of these functions can be performed by separate hardware and software (e.g., encryption).

Layer 7: Application Layer
The application layer is the end user’s access to the network. The primary purpose is to provide a set of utilities for application programs. Each user program determines the set of messages and any action it might take on receipt of a message. Other network-specific applications at this layer include network monitoring and network management.

1.3.2 Internet Model

The network model that dominates current hardware and software is the simpler five-layer Internet model. Unlike the OSI model, which was developed by formal committees, the Internet model evolved from the work of thousands of people who developed pieces of the Internet. The OSI model is a formal standard that is documented in one standard, but the Internet model has never been formally defined; it has to be interpreted from a number of standards.¹ The two models have very much in common (see Figure 1-3); simply put, the Internet model collapses the top three OSI layers into one layer. Because it is clear that the Internet has won the “war,” we use the five-layer Internet model for the rest of this book.

¹ Over the years, our view of the Internet layers has evolved, as has the Internet itself. It’s now clear that most of the Internet community thinks about networks using a five-layer view, so we’ll use it as well. As of this writing, however, Microsoft uses a four-layer view of the Internet for its certification exams.

Layer 1: The Physical Layer
The physical layer in the Internet model, as in the OSI model, is the physical connection between the sender and receiver. Its role is to transfer a series of electrical, radio, or light signals through the circuit. The physical layer includes all the hardware devices (e.g., computers, modems, and switches) and physical media (e.g., cables and satellites). The physical layer specifies the type of connection and the electrical signals, radio waves, or light pulses that pass through it. Chapter 3 discusses the physical layer in detail.

Layer 2: The Data Link Layer
The data link layer is responsible for moving a message from one computer to the next computer in the network path from the sender to the receiver. The data link layer in the Internet model performs the same three functions as the data link layer in the OSI model. First, it controls the physical layer by deciding when to transmit messages over the media. Second, it formats the messages by indicating where they start and end. Third, it detects and may correct any errors that have occurred during transmission. Chapter 4 discusses the data link layer in detail.

Layer 3: The Network Layer
The network layer in the Internet model performs the same functions as the network layer in the OSI model. First, it performs routing, in that it selects the next computer to which the message should be sent. Second, it can find the address of that computer if it doesn’t already know it. Chapter 5 discusses the network layer in detail.

Layer 4: The Transport Layer
The transport layer in the Internet model is very similar to the transport layer in the OSI model. It performs two functions. First, it is responsible for linking the application layer software to the network and establishing end-to-end connections between the sender and receiver when such connections are needed. Second, it is responsible for breaking long messages into several smaller messages to make them easier to transmit and then recombining the smaller messages back into the original larger message at the receiving end.
The transport layer can also detect lost messages and request that they be resent. Chapter 5 discusses the transport layer in detail.

Layer 5: Application Layer
The application layer is the application software used by the network user and includes much of what the OSI model contains in the application, presentation, and session layers. It is the user’s access to the network. By using the application software, the user defines what messages are sent over the network. Because it is the layer that most people understand best and because starting at the top sometimes helps people understand better, Chapter 2 begins with the application layer. It discusses the architecture of network applications and several types of network application software and the types of messages they generate.

Groups of Layers
The layers in the Internet are often so closely coupled that decisions in one layer impose certain requirements on other layers. The data link layer and the physical layer are closely tied together because the data link layer controls the physical layer in terms of when the physical layer can transmit. Because these two layers are so closely tied together, decisions about the data link layer often drive the decisions about the physical layer. For this reason, some people group the physical and data link layers together and call them the hardware layers. Likewise, the transport and network layers are so closely coupled that sometimes these layers are called the internetwork layer. See Figure 1-3. When you design a network, you often think about the network design in terms of three groups of layers: the hardware layers (physical and data link), the internetwork layers (network and transport), and the application layer.

1.3.3 Message Transmission Using Layers

Each computer in the network has software that operates at each of the layers and performs the functions required by those layers (the physical layer is hardware, not software).
Each layer in the network uses a formal language, or protocol, that is simply a set of rules that define what the layer will do and that provides a clearly defined set of messages that software at the layer needs to understand. For example, the protocol used for Web applications is HTTP (Hypertext Transfer Protocol, which is described in more detail in Chapter 2). In general, all messages sent in a network pass through all layers. All layers except the physical layer create a new Protocol Data Unit (PDU) as the message passes through them. The PDU contains information that is needed to transmit the message through the network. Some experts use the word packet to mean a PDU. Figure 1-4 shows how a message requesting a Web page would be sent on the Internet.

FIGURE 1-4 Message transmission using layers. IP = Internet Protocol; HTTP = Hypertext Transfer Protocol; TCP = Transmission Control Protocol. On the sender’s side, the HTTP request passes down through the application, transport, network, data link, and physical layers, gaining a TCP segment, an IP packet, and an Ethernet frame along the way; the receiver’s layers reverse the process.

Application Layer
First, the user creates a message at the application layer using a Web browser by clicking on a link (e.g., get the home page at www.somebody.com). The browser translates the user’s message (the click on the Web link) into HTTP. The rules of HTTP define a specific PDU—called an HTTP packet—that all Web browsers must use when they request a Web page. For now, you can think of the HTTP packet as an envelope into which the user’s message (get the Web page) is placed.
In the same way that an envelope placed in the mail needs certain information written in certain places (e.g., return address, destination address), so too does the HTTP packet. The Web browser fills in the necessary information in the HTTP packet, drops the user’s request inside the packet, and then passes the HTTP packet (containing the Web page request) to the transport layer.

Transport Layer
The transport layer on the Internet uses a protocol called TCP (Transmission Control Protocol), and it, too, has its own rules and its own PDUs. TCP is responsible for breaking large files into smaller packets and for opening a connection to the server for the transfer of a large set of packets. The transport layer places the HTTP packet inside a TCP PDU (which is called a TCP segment), fills in the information needed by the TCP segment, and passes the TCP segment (which contains the HTTP packet, which, in turn, contains the message) to the network layer.

Network Layer
The network layer on the Internet uses a protocol called IP (Internet Protocol), which has its own rules and PDUs. IP selects the next stop on the message’s route through the network. It places the TCP segment inside an IP PDU, which is called an IP packet, and passes the IP packet, which contains the TCP segment, which, in turn, contains the HTTP packet, which, in turn, contains the message, to the data link layer.

Data Link Layer
If you are connecting to the Internet using a LAN, your data link layer may use a protocol called Ethernet, which also has its own rules and PDUs.
The data link layer formats the message with start and stop markers, adds error-checking information, places the IP packet inside an Ethernet PDU, which is called an Ethernet frame, and instructs the physical hardware to transmit the Ethernet frame, which contains the IP packet, which contains the TCP segment, which contains the HTTP packet, which contains the message.

Physical Layer
The physical layer in this case is the network cable connecting your computer to the rest of the network. The computer will take the Ethernet frame (complete with the IP packet, the TCP segment, the HTTP packet, and the message) and send it as a series of electrical pulses through your cable to the server.

When the server gets the message, this process is performed in reverse. The physical hardware translates the electrical pulses into computer data and passes the message to the data link layer. The data link layer uses the start and stop markers in the Ethernet frame to identify the message. The data link layer checks for errors and, if it discovers one, requests that the message be resent. If a message is received without error, the data link layer will strip off the Ethernet frame and pass the IP packet (which contains the TCP segment, the HTTP packet, and the message) to the network layer. The network layer checks the IP address and, if it is destined for this computer, strips off the IP packet and passes the TCP segment, which contains the HTTP packet and the message, to the transport layer. The transport layer processes the message, strips off the TCP segment, and passes the HTTP packet to the application layer for processing. The application layer (i.e., the Web server) reads the HTTP packet and the message it contains (the request for the Web page) and processes it by generating an HTTP packet containing the Web page you requested. Then the process starts again as the page is sent back to you.

The Pros and Cons of Using Layers
There are three important points in this example.
First, there are many different software packages and many different PDUs that operate at different layers to successfully transfer a message. Networking is in some ways similar to the Russian matryoshka, nested dolls that fit neatly inside each other. This is called encapsulation, because the PDU at a higher level is placed inside the PDU at a lower level so that the lower-level PDU encapsulates the higher-level one. The major advantage of using different software and protocols is that it is easy to develop new software, because all one has to do is write software for one level at a time. The developers of Web applications, for example, do not need to write software to perform error checking or routing, because those are performed by the data link and network layers. Developers can simply assume those functions are performed and just focus on the application layer. Likewise, it is simple to change the software at any level (or add new application protocols), as long as the interface between that layer and the ones around it remains unchanged.

Second, it is important to note that for communication to be successful, each layer in one computer must be able to communicate with its matching layer in the other computer. For example, the physical layer connecting the client and server must use the same type of electrical signals to enable each to understand the other (or there must be a device to translate between them). Ensuring that the software used at the different layers is the same is accomplished by using standards. A standard defines a set of rules, called protocols, that explain exactly how hardware and software that conform to the standard are required to operate. Any hardware and software that conform to a standard can communicate with any other hardware and software that conform to the same standard.
Without standards, it would be virtually impossible for computers to communicate.

Third, the major disadvantage of using a layered network model is that it is somewhat inefficient. Because there are several layers, each with its own software and PDUs, sending a message involves many software programs (one for each protocol) and many PDUs. The PDUs add to the total amount of data that must be sent (thus increasing the time it takes to transmit), and the different software packages increase the processing power needed in computers. Because the protocols are used at different layers and are stacked on top of one another (take another look at Figure 1-4), the set of software used to understand the different protocols is often called a protocol stack.

1.4 NETWORK STANDARDS

1.4.1 The Importance of Standards

Standards are necessary in almost every business and public service entity. For example, before 1904, fire hose couplings in the United States were not standard, which meant a fire department in one community could not help in another community. The transmission of electric current was not standardized until the end of the nineteenth century, so customers had to choose between Thomas Edison's direct current (DC) and George Westinghouse's alternating current (AC).

The primary reason for standards is to ensure that hardware and software produced by different vendors can work together. Without networking standards, it would be difficult—if not impossible—to develop networks that easily share information. Standards also mean that customers are not locked into one vendor. They can buy hardware and software from any vendor whose equipment meets the standard. In this way, standards help to promote more competition and hold down prices.
The use of standards makes it much easier to develop software and hardware that link different networks because software and hardware can be developed one layer at a time.

1.4.2 The Standards-Making Process

There are two types of standards: de jure and de facto. A de jure standard is developed by an official industry or a government body and is often called a formal standard. For example, there are de jure standards for applications such as Web browsers (e.g., HTTP, HTML), for network layer software (e.g., IP), for data link layer software (e.g., Ethernet IEEE 802.3), and for physical hardware (e.g., V.90 modems). De jure standards typically take several years to develop, during which time technology changes, making them less useful.

De facto standards are those that emerge in the marketplace and are supported by several vendors but have no official standing. For example, Microsoft Windows is a product of one company and has not been formally recognized by any standards organization, yet it is a de facto standard. In the communications industry, de facto standards often become de jure standards once they have been widely accepted.

The de jure standardization process has three stages: specification, identification of choices, and acceptance. The specification stage consists of developing a nomenclature and identifying the problems to be addressed. In the identification of choices stage, those working on the standard identify the various solutions and choose the optimum solution from among the alternatives. Acceptance, which is the most difficult stage, consists of defining the solution and getting recognized industry leaders to agree on a single, uniform solution.
As with many other organizational processes that have the potential to influence the sales of hardware and software, standards-making processes are not immune to corporate politics and the influence of national governments.

International Organization for Standardization

One of the most important standards-making bodies is the International Organization for Standardization (ISO),2 which makes technical recommendations about data communication interfaces (see www.iso.org). ISO is based in Geneva, Switzerland. The membership is composed of the national standards organizations of each ISO member country.

International Telecommunications Union—Telecommunications Group

The Telecommunications Group (ITU-T) is the technical standards-setting organization of the United Nations International Telecommunications Union, which is also based in Geneva (see www.itu.int). ITU is composed of representatives from about 200 member countries. Membership was originally focused on just the public telephone companies in each country, but a major reorganization in 1993 changed this, and ITU now seeks members among public- and private-sector organizations who operate computer or communications networks (e.g., RBOCs) or build software and equipment for them (e.g., AT&T).

American National Standards Institute

The American National Standards Institute (ANSI) is the coordinating organization for the U.S. national system of standards for both technology and nontechnology (see www.ansi.org). ANSI has about 1,000 members from both public and private organizations in the United States. ANSI is a standardization organization, not a standards-making body, in that it accepts standards developed by other organizations and publishes them as American standards. Its role is to coordinate the development of voluntary national standards and to interact with ISO to develop national standards that comply with ISO's international recommendations. ANSI is a voting participant in the ISO.
Institute of Electrical and Electronics Engineers

The Institute of Electrical and Electronics Engineers (IEEE) is a professional society in the United States whose Standards Association (IEEE-SA) develops standards (see www.standards.ieee.org). The IEEE-SA is probably most known for its standards for LANs. Other countries have similar groups; for example, the British counterpart of IEEE is the Institution of Electrical Engineers (IEE).

Internet Engineering Task Force

The IETF sets the standards that govern how much of the Internet will operate (see www.ietf.org). The IETF is unique in that it doesn't really have official memberships. Quite literally anyone is welcome to join its mailing lists, attend its meetings, and comment on developing standards. The role of the IETF and other Internet organizations is discussed in more detail in Chapter 8; also, see the box entitled "How Network Protocols Become Standards."

2 You're probably wondering why the abbreviation is ISO, not IOS. Well, ISO is a word (not an acronym) derived from the Greek isos, meaning "equal." The idea is that with standards, all are equal.

MANAGEMENT FOCUS 1-2: How Network Protocols Become Standards

There are many standards organizations around the world, but perhaps the best known is the Internet Engineering Task Force (IETF). IETF sets the standards that govern how much of the Internet operates. The IETF, like all standards organizations, tries to seek consensus among those involved before issuing a standard. Usually, a standard begins as a protocol (i.e., a language or set of rules for operating) developed by a vendor (e.g., HTML [Hypertext Markup Language]). When a protocol is proposed for standardization, the IETF forms a working group of technical experts to study it.
The working group examines the protocol to identify potential problems and possible extensions and improvements, then issues a report to the IETF. If the report is favorable, the IETF issues a Request for Comment (RFC) that describes the proposed standard and solicits comments from the entire world. Most large software companies likely to be affected by the proposed standard prepare detailed responses. Many "regular" Internet users also send their comments to the IETF. The IETF reviews the comments and possibly issues a new and improved RFC, which again is posted for more comments. Once no additional changes have been identified, it becomes a proposed standard.

Usually, several vendors adopt the proposed standard and develop products based on it. Once at least two vendors have developed hardware or software based on it and it has proven successful in operation, the proposed standard is changed to a draft standard. This is usually the final specification, although some protocols have been elevated to Internet standards, which usually signifies mature standards not likely to change.

The process does not focus solely on technical issues; almost 90% of the IETF's participants work for manufacturers and vendors, so market forces and politics often complicate matters. One former IETF chairperson who worked for a hardware manufacturer has been accused of trying to delay the standards process until his company had a product ready, although he and other IETF members deny this. Likewise, former IETF directors have complained that members try to standardize every product their firms produce, leading to a proliferation of standards, only a few of which are truly useful.

Sources: "How Networking Protocols Become Standards," PC Week, March 17, 1997; "Growing Pains," Network World, April 14, 1997.

MANAGEMENT FOCUS 1-3: Keeping Up with Technology

The data communications and networking arena changes rapidly.
Significant new technologies are introduced and new concepts are developed almost every year. It is therefore important for network managers to keep up with these changes.

There are at least three useful ways to keep up with change. First and foremost for users of this book is the Web site for this book, which contains updates to the book, additional sections, teaching materials, and links to useful Web sites. Second, there are literally hundreds of thousands of Web sites with data communications and networking information. Search engines can help you find them. A good initial starting point is the telecom glossary at www.atis.org. Two other useful sites are networkcomputing.com and zdnet.com. Third, there are many useful magazines that discuss computer technology in general and networking technology in particular, including Network Computing, Data Communications, Info World, Info Week, and CIO Magazine.

FIGURE 1-5 Some common data communications standards. HTML = Hypertext Markup Language; HTTP = Hypertext Transfer Protocol; IMAP = Internet Message Access Protocol; IP = Internet Protocol; LAN = local area network; MPEG = Motion Picture Experts Group; POP = Post Office Protocol; TCP = Transmission Control Protocol

Layer                  Common Standards
5. Application layer   HTTP, HTML (Web); MPEG, H.323 (audio/video); SMTP, IMAP, POP (e-mail)
4. Transport layer     TCP (Internet and LANs)
3. Network layer       IP (Internet and LANs)
2. Data link layer     Ethernet (LAN); Frame relay (WAN); T1 (MAN and WAN)
1. Physical layer      RS-232C cable (LAN); Category 5 cable (LAN); V.92 (56 Kbps modem)

1.4.3 Common Standards

There are many different standards used in networking today. Each standard usually covers one layer in a network. Some of the most commonly used standards are shown in Figure 1-5.
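As a quick self-check, the contents of Figure 1-5 can also be written out as a small lookup table and printed top to bottom. The layer names and standard names below are copied from the figure; the table structure itself is just an illustration.

```python
# Figure 1-5 as a lookup table: layer number -> (layer name, common standards).
COMMON_STANDARDS = {
    5: ("Application", ["HTTP", "HTML", "MPEG", "H.323", "SMTP", "IMAP", "POP"]),
    4: ("Transport",   ["TCP"]),
    3: ("Network",     ["IP"]),
    2: ("Data link",   ["Ethernet", "Frame relay", "T1"]),
    1: ("Physical",    ["RS-232C cable", "Category 5 cable", "V.92"]),
}

def show_stack():
    """Return the protocol stack as text, top layer first (as in Figure 1-5)."""
    lines = []
    for number in sorted(COMMON_STANDARDS, reverse=True):
        name, standards = COMMON_STANDARDS[number]
        lines.append(f"{number}. {name} layer: {', '.join(standards)}")
    return "\n".join(lines)

print(show_stack())
```

Reading the printed stack from top to bottom mirrors the path a message takes as it is encapsulated on its way out of a computer.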
At this point, these models are probably just a maze of strange names and acronyms to you, but by the end of the book, you will have a good understanding of each of these. Figure 1-5 provides a brief road map for some of the important communication technologies we discuss in this book.

For now, there is one important message you should understand from Figure 1-5: For a network to operate, many different standards must be used simultaneously. The sender of a message must use one standard at the application layer, another one at the transport layer, another one at the network layer, another one at the data link layer, and another one at the physical layer. Each layer and each standard is different, but all must work together to send and receive messages. Either the sender and receiver of a message must use the same standards or, more likely, there are devices between the two that translate from one standard into another. Because different networks often use software and hardware designed for different standards, there is often a lot of translation between different standards.

1.5 FUTURE TRENDS

The field of data communications has grown faster and become more important than computer processing itself. Both go hand in hand, but we have moved from the computer era to the communication era. Three major trends are driving the future of communications and networking.

1.5.1 Wireless LAN and BYOD

The rapid development of mobile devices, such as smartphones and tablets, has encouraged employers to allow their employees to bring these devices to work and use them to access data, such as their work email. This movement, called bring your own device, or BYOD, is a great way to get work done quickly, saves money, and makes employees happy. But BYOD also brings its own problems. Employers need to add or expand their wireless local area networks (WLANs) to support all these new devices.
Another important problem is security. Employees bring these devices to work so that they can access not only their email but also other critical company assets, such as information about their clients, suppliers, or sales. Employers face myriad decisions about how to manage access to company applications for BYOD. Companies can adopt two main approaches: (1) native apps or (2) browser-based technologies. Native apps require an app to be developed for each application that an employee might be using for every potential device that the employee might use (e.g., iPhone, Android, Windows). The browser-based approach (often referred to as responsive design using HTML5) doesn't create an app but rather requires employees to access the application through a Web browser. Both these approaches have their pros and cons, and only the future will show which one is the winner.

What if an employee loses his or her mobile phone or tablet, so that the application that accesses critical company data can now be used by anybody who finds the device? Will the company's data be compromised? Device and data loss practices now have to be added to the general security practices of the company. Employees need to have apps that allow their employer to wipe their phones clean in case of loss so that no company data are compromised (e.g., SOTI's MobiControl). In some cases, companies require the employee to allow monitoring of the device at all times to ensure that security risks are minimized. However, some argue that this is not a good practice because the device belongs to the employee, and monitoring it 24/7 invades the employee's privacy.

1.5.2 The Web of Things

Telephones and computers used to be separate. Today voice and data have converged into unified communications, with phones plugged into computers or directly into the LAN using Voice over Internet Protocol (VoIP).
Vonage and Skype have taken this one step further and offer telephone service over the Internet at dramatically lower prices than traditional separate landline phones, whether from traditional phones or via computer microphones and speakers.

Computers and networks can also be built into everyday things, such as kitchen appliances, doors, and shoes. In the future, the Web will move from being a Web of computers to also being a Web of Things with which we interact using a computer. All this interaction will happen seamlessly, without human intervention. And we will get used to seeing our shoes tell us how far we walked, our refrigerator telling us what food we need to buy, and our locks opening and closing without physical keys and telling us who entered and left at what times.

The Web of Things is already under way. For example, Microsoft has an Envisioning Center that focuses on creating the future of work and play (it is open to the public). At the Envisioning Center, a person can communicate with his or her colleagues through digital walls that enable the person to visualize projects through simulation and then rapidly move to execution of ideas. In the home of the future, anyone can, for example, be a chef and adapt recipes based on dietary needs or ingredients in the pantry (see Figure 1-6) through the use of Kinect technology.

Google is another leading innovator in the Web of Things. Google has been developing a self-driving car for several years. This self-driving car not only passes a standard driving test but also spends less time in near-collision states on public roads in California and Nevada. Of course, for such a car to appear in other states, technology has to be installed that allows the car to "see" the road.
Other car developers have started installing computer technology that not only parallel parks the car but also applies the brakes to avoid collisions.

1.5.3 Massively Online

You have probably heard of massively multiplayer online games, such as World of Warcraft, where you can play with thousands of players in real time. Well, today not only games are massively online. Education is massively online. Khan Academy, Lynda.com, and Code Academy have Web sites that offer thousands of education modules for children and adults in myriad fields to help them learn. Your class very likely also has an online component. You may even use this textbook online and decide whether your comments are for you only, for your instructor, or for the entire class to read.

FIGURE 1-6 Microsoft's Envisioning Center—Smart Stovetop that helps you cook without getting in your way. Source: Smart Stovetop, Microsoft's Envisioning Center. Used with permission by Microsoft.

In addition, you may have heard about massive open online courses, or MOOCs. MOOCs enable students who otherwise wouldn't have access to elite universities to get access to top knowledge without having to pay the tuition. These classes are offered by universities such as Stanford, UC Berkeley, MIT, UCLA, and Carnegie Mellon, free of charge and for no credit (although at some universities, you can pay and get credit toward your degree).

Politics has also moved massively online. President Obama reached out to the crowds and ordinary voters not only through his Facebook page but also through Reddit and Google Hangouts. Many other politicians use social computing to reach potential voters. Finally, massively online allows activists to reach masses of people in a very short period of time to initiate change.
Examples of the use of YouTube videos or Facebook for activism include the Arab Spring, Kony 2012, and the use of sarin gas in Syria. So what started as a game with thousands of people being online at the same time is being reinvented for good use in education, politics, and activism. Only the future will show what humanity can do with what massively online has to offer.

What these three trends have in common is that there will be an increasing demand for professionals who understand the development of data communications and networking infrastructure to support this growth. There will be more and more need to build faster and more secure networks that will allow individuals and organizations to connect to resources, probably stored on cloud infrastructure (either private or public). This need will call not only for engineers who deeply understand the technical aspects of networks but also for highly social individuals who embrace technology in creative ways to allow business to achieve a competitive edge through utilizing this technology. So the call is for you who are reading this book—you are in the right place at the right time!

1.6 IMPLICATIONS FOR MANAGEMENT

At the end of each chapter, we provide key implications for management that arise from the topics discussed in the chapter. We draw implications that focus on improving the management of networks and information systems as well as implications for the management of the organization as a whole.

FIGURE 1-7 One server farm with more than 1,000 servers. Source: © zentilia/iStockphoto

There are three key implications for management from this chapter. First, networks and the Internet change almost everything.
The ability to quickly and easily move information from distant locations and to enable individuals inside and outside the firm to access information and products from around the world changes the way organizations operate, the way businesses buy and sell products, and the way we as individuals work, live, play, and learn. Companies and individuals who embrace change and actively seek to apply networks and the Internet to improve what they do will thrive; companies and individuals who do not will gradually find themselves falling behind.

Second, today's networking environment is driven by standards. The use of standard technology means an organization can easily mix and match equipment from different vendors. The use of standard technology also means that it is easier to migrate from older technology to a newer technology, because most vendors designed their products to work with many different standards. The use of a few standard technologies rather than a wide range of vendor-specific proprietary technologies also lowers the cost of networking because network managers have fewer technologies they need to learn about and support. If your company is not using a narrow set of industry-standard networking technologies (whether those are de facto standards such as Windows, open standards such as Linux, or de jure standards such as 802.11n wireless LANs), then it is probably spending too much money on its networks.

Third, as the demand for network services and network capacity increases, so too will the need for storage and server space. Finding efficient ways to store all the information we generate will open new market opportunities. Today, Google has almost a million Web servers (see Figure 1-7). If we assume that each server costs an average of $1,000, the money large companies spend on storage is close to $1 billion. Capital expenditure of this scale is then increased by money spent on power and staffing.
One way companies can reduce this amount of money is to store their data using cloud computing.

SUMMARY

Introduction The information society, where information and intelligence are the key drivers of personal, business, and national success, has arrived. Data communications is the principal enabler of the rapid information exchange and will become more important than the use of computers themselves in the future. Successful users of data communications, such as Wal-Mart, can gain significant competitive advantage in the marketplace.

Network Definitions A local area network (LAN) is a group of computers located in the same general area. A backbone network (BN) is a large central network that connects almost everything on a single company site. A metropolitan area network (MAN) encompasses a city or county area. A wide area network (WAN) spans city, state, or national boundaries.

Network Model Communication networks are often broken into a series of layers, each of which can be defined separately, to enable vendors to develop software and hardware that can work together in the overall network. In this book, we use a five-layer model. The application layer is the application software used by the network user. The transport layer takes the message generated by the application layer and, if necessary, breaks it into several smaller messages. The network layer addresses the message and determines its route through the network. The data link layer formats the message to indicate where it starts and ends, decides when to transmit it over the physical media, and detects and corrects any errors that occur in transmission. The physical layer is the physical connection between the sender and receiver, including the hardware devices (e.g., computers, terminals, and modems) and physical media (e.g., cables and satellites).
Each layer, except the physical layer, adds a Protocol Data Unit (PDU) to the message.

Standards Standards ensure that hardware and software produced by different vendors can work together. A de jure standard is developed by an official industry or a government body. De facto standards are those that emerge in the marketplace and are supported by several vendors but have no official standing. Many different standards and standards-making organizations exist.

Future Trends At the same time as the use of BYOD offers efficiency at the workplace, it opens up the doors for security problems that companies need to consider. Our interactions with colleagues and family will very likely change in the next 5–10 years because of the Web of Things, where devices will interact with each other without human intervention. Finally, massively online not only changed the way we play computer games but also showed that humanity can change its history.

KEY TERMS

American National Standards Institute (ANSI), 14; application layer, 10; backbone network (BN), 7; cable, 5; circuit, 5; client, 5; data link layer, 10; extranet, 7; file server, 5; hardware layer, 10; Institute of Electrical and Electronics Engineers (IEEE), 14; International Telecommunications Union—Telecommunications Group (ITU-T), 14; Internet Engineering Task Force (IETF), 15; Internet model, 9; Internet service provider (ISP), 1; internetwork layer, 10; intranet, 7; layers, 7; local area network (LAN), 6; network layer, 10; Open Systems Interconnection Reference model (OSI model), 8; peer-to-peer network, 5; physical layer, 9; print server, 5; protocol, 10; Protocol Data Unit (PDU), 11; protocol stack, 13; Request for Comment (RFC), 15; router, 5; server, 4; standards, 13; transport layer, 10; Web server, 5; wide area network (WAN), 7

QUESTIONS
1. How can data communications networks affect businesses?
2. Discuss three important applications of data communications networks in business and personal use.
3. How do local area networks (LANs) differ from wide area networks (WANs) and backbone networks (BNs)?
4. What is a circuit?
5. What is a client?
6. What is a server?
7. Why are network layers important?
8. Describe the seven layers in the OSI network model and what they do.
9. Describe the five layers in the Internet network model and what they do.
10. Explain how a message is transmitted from one computer to another using layers.
11. Describe the three stages of standardization.
12. How are Internet standards developed?
13. Describe two important data communications standards-making bodies. How do they differ?
14. What is the purpose of a data communications standard?
15. What are three of the largest interexchange carriers (IXCs) in North America?
16. Discuss three trends in communications and networking.
17. Why has the Internet model replaced the Open Systems Interconnection Reference (OSI) model?
18. In the 1980s, when we wrote the first edition of this book, there were many, many more protocols in common use at the data link, network, and transport layers than there are today. Why do you think the number of commonly used protocols at these layers has declined? Do you think this trend will continue? What are the implications for those who design and operate networks?
19. The number of standardized protocols in use at the application layer has significantly increased since the 1980s. Why? Do you think this trend will continue? What are the implications for those who design and operate networks?
20. How many bits (not bytes) are there in a 10-page text document? Hint: There are approximately 350 words on a double-spaced page.

EXERCISES

A. Investigate the long-distance carriers (interexchange carriers [IXCs]) and local exchange carriers (LECs) in your area.
What services do they provide, and what pricing plans do they have for residential users?

B. Discuss the issue of communications monopolies and open competition with an economics instructor and relate his or her comments to your data communications class.

C. Find a college or university offering a specialized degree in telecommunications or data communications and describe the program.

D. Describe a recent data communication development you have read about in a newspaper or magazine and how it may affect businesses.

E. Investigate the networks in your school or organization. Describe the important local area networks (LANs) and backbone networks (BNs) in use (but do not describe the specific clients, servers, or devices on them).

F. Use the Web to search the Internet Engineering Task Force (IETF) Web site (www.ietf.org). Describe one standard that is in the request for comment (RFC) stage.

G. Discuss how the revolution/evolution of communications and networking is likely to affect how you will work and live in the future.

H. Investigate the pros and cons of developing native apps versus taking a browser-based approach.

Fitzgerald c01.tex V2 - July 25, 2014 10:04 A.M.

MINICASES

I. Global Consultants

John Adams is the chief information officer (CIO) of Global Consultants (GC), a very large consulting firm with offices in more than 100 countries around the world. GC is about to purchase a set of several Internet-based financial software packages that will be installed in all of their offices. There are no standards at the application layer for financial software, but several software companies that sell financial software (call them group A) use one de facto standard to enable their software to work with one another’s software. However, another group of financial software companies (call them group B) use a different de facto standard. Although both groups have software packages that GC could use, GC would really prefer to buy one package from group A for one type of financial analysis and one package from group B for a different type of financial analysis. The problem, of course, is that then the two packages cannot communicate, and GC’s staff would end up having to type the same data into both packages. The alternative is to buy two packages from the same group—so that data could be easily shared—but that would mean having to settle for second best for one of the packages. Although there have been some reports in the press about the two groups of companies working together to develop one common standard that will enable software to work together, there is no firm agreement yet. What advice would you give Adams?

II. Atlas Advertising

Atlas Advertising is a regional advertising agency with offices in Boston, New York, Providence, Washington, D.C., and Philadelphia.

1. Describe the types of networks you think they would have (e.g., LANs, BNs, WANs) and where they are likely to be located.

2. What types of standard protocols and technologies do you think they are using at each layer (e.g., see Figure 1-5)?

III. Consolidated Supplies

Consolidated Supplies is a medium-sized distributor of restaurant supplies that operates in Canada and several northern U.S. states. They have 12 large warehouses spread across both countries to service their many customers. Products arrive from the manufacturers and are stored in the warehouses until they are picked and put on a truck for delivery to their customers. The networking equipment in their warehouses is old and is starting to give them problems; these problems are expected to increase as the equipment gets older. The vice president of operations, Pat McDonald, would like to replace the existing LANs and add some new wireless LAN technology into all the warehouses, but he is concerned that now may not be the right time to replace the equipment. He has read several technology forecasts that suggest there will be dramatic improvements in networking speeds over the next few years, especially in wireless technologies. He has asked you for advice about upgrading the equipment. Should Consolidated Supplies replace all the networking equipment in all the warehouses now, should it wait until newer networking technologies are available, or should it upgrade some of the warehouses this year, some next year, and some the year after, so that some warehouses will benefit from the expected future improvements in networking technologies?

IV. Asia Importers

Caisy Wong is the owner of a small catalog company that imports a variety of clothes and housewares from several Asian countries and sells them to its customers over the Web and by telephone through a traditional catalog. She has read about the convergence of voice and data and is wondering about changing her current traditional, separate, and rather expensive telephone and data services into one service offered by a new company that will supply both telephone and data over her Internet connection. What are the potential benefits and challenges that Asia Importers should consider in making the decision about whether to move to one integrated service?

CASE STUDY

NEXT-DAY AIR SERVICE

See the book companion site at www.wiley.com/college/fitzgerald.

HANDS-ON ACTIVITY 1A

Convergence at Home

We talked about the convergence of voice, video, and data into unified communications. The objective of this Activity is for you to experience this convergence.

FIGURE 1-8 Voice, video, and data in Yahoo! Instant Messenger

1. Yahoo! Instant Messenger is one of the many tools that permit the convergence of voice, video, and text data over the Internet. Use your browser to connect to messenger.yahoo.com and sign up for Yahoo!
Instant Messenger, then download and install it—or use the tool of your choice (Skype is another good tool). Buy an inexpensive Webcam with a built-in microphone.

2. Get your parents to do the same.

3. Every weekend, talk to your parents using IM text, voice, and video (see Figure 1-8). It’s free, so there’s no phone bill to worry about, and the video will make everyone feel closer. If you want to feel even closer, connect to them and just leave the voice and video on while you do your homework; no need to talk, just spend time together online.

Deliverable

A log of your conversations showing the date and time of the conversation, the person(s) you spoke with, and how long the conversation lasted.

HANDS-ON ACTIVITY 1B

Seeing the PDUs in Your Messages

We talked about how messages are transferred using layers and the different Protocol Data Units (PDUs) used at each layer. The objective of this Activity is for you to see the different PDUs in the messages that you send. To do this, we’ll use Wireshark, which is one of the world’s foremost network protocol analyzers and is the de facto standard that most professional and educational institutions use today. It is used for network troubleshooting, network analysis, software and communications protocol development, and general education about how networks work.

Wireshark enables you to see all messages sent by your computer, as well as some or all of the messages sent by other computers on your LAN, depending on how your LAN is designed. Most modern LANs are designed to prevent you from eavesdropping on other computers’ messages, but some older ones still permit this. Normally, your computer will ignore the messages that are not addressed to your computer, but Wireshark enables you to eavesdrop and read messages sent to and from other computers.
FIGURE 1-9 Wireshark capture (the Filter toolbar is at the top of the window)

Wireshark is free. Before you start this activity, download and install it from www.wireshark.org.

1. Start Wireshark.

2. Click on Capture and then Interfaces. Click the Start button next to the active interface (the one that is receiving and sending packets). Your network data will be captured from this moment on.

3. Open your browser and go to a Web page that you have not visited recently (a good one is www.iana.org).

4. Once the Web page has loaded, go back to Wireshark and stop the packet capture by clicking on Capture and then Stop (the hot key for this is Ctrl + E).

5. You will see results similar to those in Figure 1-9. There are three windows below the toolbar:

a. The top window is the Packet List. Each line represents a single message or packet that was captured by Wireshark. Different types of packets will have different colors. For example, HTTP packets are colored green. Depending on how busy your network is, you may see a small number of packets in this window or a very large number of packets.

b. The middle window is the Packet Detail. This will show the details for any packet you click on in the top window.

c. The bottom window shows the actual contents of the packet in hexadecimal format, so it is usually hard to read. This window is typically used by network programmers to debug errors.

6. Let’s take a look at the packets that were used to request the Web page and send it to your computer. The application layer protocol used on the Web is HTTP, so we’ll want to find the HTTP packets. In the Filter toolbar, type http and hit enter.

7. This will highlight all the packets that contain HTTP packets and will display the first one in the Packet Detail window. Look at the Packet Detail window in Figure 1-9 to see the PDUs in the message we’ve highlighted.
You’ll see that it contains an Ethernet II frame, an IP packet, a TCP segment, and an HTTP packet. You can see inside any or all of these PDUs by clicking on the + box in front of them. In Figure 1-9, you’ll see that we’ve clicked the + box in front of the HTTP packet to show you what’s inside it.

Deliverables

1. List the PDUs at layers 2, 3, and 4 that were used to transmit your HTTP GET packet.

a. Locate your HTTP GET packet in the Packet List and click on it.

b. Look in the Packet Detail window to get the PDU information.

2. How many different HTTP GET packets were sent by your browser? Not all the HTTP packets are GET packets, so you’ll have to look through them to answer this question.

3. List at least five other protocols that Wireshark displayed in the Packet List window. You will need to clear the filter by clicking on the “Clear” icon that is on the right of the Filter toolbar.

PART TWO: FUNDAMENTAL CONCEPTS

CHAPTER 2

APPLICATION LAYER

The application layer (also called layer 5) is the software that enables the user to perform useful work. The software at the application layer is the reason for having the network, because it is this software that provides the business value. This chapter examines the five fundamental types of application architectures used at the application layer (host-based, client-based, client-server, cloud-based, and peer-to-peer).
It then looks at the Internet and the primary software application packages it enables: the Web, email, Telnet, and instant messaging.

OBJECTIVES

◾ Understand host-based, client-based, client-server, and cloud-based application architectures
◾ Understand how the Web works
◾ Understand how email works
◾ Be aware of how Telnet and instant messaging work

OUTLINE

2.1 Introduction
2.2 Application Architectures
2.2.1 Host-Based Architectures
2.2.2 Client-Based Architectures
2.2.3 Client-Server Architectures
2.2.4 Cloud Computing Architectures
2.2.5 Peer-to-Peer Architectures
2.2.6 Choosing Architectures
2.3 World Wide Web
2.3.1 How the Web Works
2.3.2 Inside an HTTP Request
2.3.3 Inside an HTTP Response
2.4 Electronic Mail
2.4.1 How Email Works
2.4.2 Inside an SMTP Packet
2.4.3 Attachments in Multipurpose Internet Mail Extension
2.5 Other Applications
2.5.1 Telnet
2.5.2 Instant Messaging
2.5.3 Videoconferencing
2.6 Implications for Management
Summary

2.1 INTRODUCTION

Network applications are the software packages that run in the application layer. You should be quite familiar with many types of network software, because it is these application packages that you use when you use the network. In many respects, the only reason for having a network is to enable these applications.

In this chapter, we first discuss five basic architectures for network applications and how each of those architectures affects the design of networks. Because you probably have a good understanding of applications such as the Web and word processing, we will use those as examples of different application architectures. We then examine several common applications used on the Internet (e.g., Web, email) and use those to explain how application software interacts with the networks. By the end of this chapter, you should have a much better understanding of the application layer in the network model and what exactly we meant when we used the term protocol data unit in Chapter 1.
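As a refresher on that idea, the layering from Chapter 1 can be sketched in a few lines of code: each layer treats everything handed down from above as its payload and prepends its own header, forming that layer’s PDU. This is a toy model, not real wire formats; the header strings and addresses are invented for illustration:

```python
# Toy model of encapsulation: each layer wraps the PDU from the layer above.

def http_layer(body: str) -> str:               # application layer PDU
    return "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" + body

def tcp_layer(payload: str) -> str:             # transport layer PDU (segment)
    return "[TCP src=49152 dst=80]" + payload

def ip_layer(payload: str) -> str:              # network layer PDU (packet)
    return "[IP src=192.0.2.10 dst=192.0.2.99]" + payload

def ethernet_layer(payload: str) -> str:        # data link layer PDU (frame)
    return "[ETH dst=aa:bb:cc:dd:ee:ff]" + payload + "[ETH trailer]"

# The frame on the wire nests one PDU inside another, just as Wireshark's
# Packet Detail window shows: frame -> packet -> segment -> HTTP message.
frame = ethernet_layer(ip_layer(tcp_layer(http_layer(""))))
print(frame)
```

Reading the printed string from the outside in retraces exactly the Ethernet II frame, IP packet, TCP segment, and HTTP packet you expanded in Activity 1B.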
2.2 APPLICATION ARCHITECTURES

In Chapter 1, we discussed how the three basic components of a network (client computer, server computer, and circuit) worked together. In this section, we will get a bit more specific about how the client computer and the server computer can work together to provide application software to the users. An application architecture is the way in which the functions of the application layer software are spread among the clients and servers in the network.

The work done by any application program can be divided into four general functions. The first is data storage. Most application programs require data to be stored and retrieved, whether it is a small file such as a memo produced by a word processor or a large database such as an organization’s accounting records. The second function is data access logic, the processing required to access data, which often means database queries in SQL (structured query language). The third function is the application logic (sometimes called business logic), which also can be simple or complex, depending on the application. The fourth function is the presentation logic, the presentation of information to the user and the acceptance of the user’s commands. These four functions—data storage, data access logic, application logic, and presentation logic—are the basic building blocks of any application.

There are many ways in which these four functions can be allocated between the client computers and the servers in a network. There are five fundamental application architectures in use today. In host-based architectures, the server (or host computer) performs virtually all of the work. In client-based architectures, the client computers perform most of the work. In client-server architectures, the work is shared between the servers and clients.
In cloud-based architectures, the cloud provides services (software, platform, and/or infrastructure) to the client. In peer-to-peer architectures, computers are both clients and servers and thus share the work. Although the client-server architecture is the dominant application architecture, cloud-based architecture is becoming the runner-up because it offers rapid scalability and deployability of computer resources.

TECHNICAL FOCUS 2-1: Cloud Computing Deployment Models

When an organization decides to use a cloud-based architecture, it needs to decide which deployment model it will use. There are three deployment models from which to choose:

• Private cloud: As the name suggests, private clouds are created for the exclusive use of a single private organization. The cloud (hardware and software) would be hosted by the organization in a private data center. This deployment model provides the highest levels of control, privacy, and security. This model is often used by organizations needing to satisfy regulations posed by regulators, such as in the financial and health care industries.

• Public cloud: This deployment model is used by multiple organizations that share the same cloud resources. The level of control is lower than in private clouds, and many companies are concerned with the security of their data. However, this deployment model doesn’t require any upfront capital investment, and the selected service can be up and running in a few days. Public clouds are a good choice when a lot of people in the organization are using the same application. Because of this, the most frequently used software as a service (SaaS) is email. For example, many universities have moved to this model for their students.

• Community cloud: This deployment model is used by organizations that have a common purpose. Rather than each organization creating its own private cloud, organizations decide to collaborate and pool their resources.
Although this cloud is not private, only a limited number of companies have access to it. Community clouds are considered to be a subset of public clouds. Therefore, community clouds realize the benefits of cloud infrastructure (such as speed of deployment) with the added level of privacy and security that private clouds offer. This deployment model is often used in the government, health care, and finance industries, members of which have similar application needs and require a very high level of security.

Sometimes an organization will choose to use only one of these deployment models for all its cloud-based applications. This strategy is called a pure strategy, such as a pure private cloud strategy or a pure public cloud strategy. In other cases, the organization is best supported by a mix of public, private, and community clouds for different applications. This strategy is called a hybrid cloud strategy. A hybrid cloud strategy allows the organization to take advantage of the benefits that these different cloud deployment models offer. For example, a hospital can use Gmail for its email application (public cloud) but a private cloud for patient data, which require high security. The downside of a hybrid cloud strategy is that an organization has to deal with different platforms and cloud providers. However, this strategy offers the greatest flexibility, so most organizations eventually end up with it.

2.2.1 Host-Based Architectures

The very first data communications networks developed in the 1960s were host-based, with the server (usually a large mainframe computer) performing all four functions. The clients (usually terminals) enabled users to send and receive messages to and from the host computer.
The clients merely captured keystrokes, sent them to the server for processing, and accepted instructions from the server on what to display (see Figure 2-1). This very simple architecture often works very well. Application software is developed and stored on the one server along with all data. If you’ve ever used a terminal, you’ve used a host-based application. There is one point of control, because all messages flow through the one central server. In theory, there are economies of scale, because all computer resources are centralized (but more on cost later).

There are two fundamental problems with host-based networks. First, the server must process all messages. As the demands for more and more network applications grow, many servers become overloaded and unable to quickly process all the users’ demands. Prioritizing users’ access becomes difficult. Response time becomes slower, and network managers are required to spend increasingly more money to upgrade the server. Unfortunately, upgrades to the mainframes that usually are the servers in this architecture are “lumpy.” That is, upgrades come in large increments and are expensive (e.g., $500,000); it is difficult to upgrade “a little.”

2.2.2 Client-Based Architectures

In the late 1980s, there was an explosion in the use of personal computers. Today, more than 90% of most organizations’ total computer processing power resides on personal computers, not in centralized mainframe computers. Part of this expansion was fueled by a number of low-cost, highly popular applications such as word processors, spreadsheets, and presentation graphics programs. It was also fueled in part by managers’ frustrations with application software on host mainframe computers. Most mainframe software is not as easy to use as personal computer software, is far more expensive, and can take years to develop.
In the late 1980s, many large organizations had application development backlogs of 2 to 3 years; that is, getting any new mainframe application program written would take years. New York City, for example, had a 6-year backlog. In contrast, managers could buy personal computer packages or develop personal computer-based applications in a few months.

FIGURE 2-1 Host-based architecture: the client (a terminal) performs no processing; the server (a mainframe computer) provides the presentation logic, application logic, data access logic, and data storage.

FIGURE 2-2 Client-based architecture: the client (a personal computer) provides the presentation logic, application logic, and data access logic; the server (a personal computer) provides the data storage.

With client-based architectures, the clients are personal computers on a LAN, and the server is usually another personal computer on the same network. The application software on the client computers is responsible for the presentation logic, the application logic, and the data access logic; the server simply stores the data (Figure 2-2). This simple architecture often works very well. If you’ve ever used a word processor and stored your document file on a server (or written a program in Visual Basic or C that runs on your computer but stores data on a server), you’ve used a client-based architecture.

The fundamental problem in client-based networks is that all data on the server must travel to the client for processing. For example, suppose the user wishes to display a list of all employees with company life insurance. All the data in the database (or all the indices) must travel from the server where the database is stored over the network circuit to the client, which then examines each record to see if it matches the data requested by the user.
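The difference between shipping the whole table to the client and shipping only the matching rows can be sketched in a few lines. The employee records and field names here are hypothetical, and plain Python lists stand in for the database and the network transfer:

```python
# Hypothetical employee table held on the server (data storage).
employees = [
    {"name": "Ada",   "life_insurance": True},
    {"name": "Grace", "life_insurance": False},
    {"name": "Alan",  "life_insurance": True},
]

# Client-based: every record crosses the circuit, and the CLIENT runs the
# data access logic, discarding the rows it did not need.
transferred = list(employees)                    # all 3 records travel
insured = [e for e in transferred if e["life_insurance"]]

# Client-server: the SERVER runs the data access logic (think of an SQL
# WHERE clause) and transmits only the matching records.
def server_query(table, predicate):
    return [row for row in table if predicate(row)]

insured2 = server_query(employees, lambda e: e["life_insurance"])  # only 2 travel
assert insured == insured2       # same answer, very different network load
```

For three records the difference is trivial; for a million-row table, the client-based version moves the entire table across the circuit every time the query runs.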
This can overload the network circuits because far more data are transmitted from the server to the client than the client actually needs.

2.2.3 Client-Server Architectures

Most applications written today use client-server architectures. Client-server architectures attempt to balance the processing between the client and the server by having both do some of the logic. In these networks, the client is responsible for the presentation logic, whereas the server is responsible for the data access logic and data storage. The application logic may either reside on the client, reside on the server, or be split between both. Figure 2-3 shows the simplest case, with the presentation logic and application logic on the client and the data access logic and data storage on the server. In this case, the client software accepts user requests and performs the application logic that produces database requests that are transmitted to the server. The server software accepts the database requests, performs the data access logic, and transmits the results to the client. The client software accepts the results and presents them to the user.

When you used a Web browser to get pages from a Web server, you used a client-server architecture. Likewise, if you’ve ever written a program that uses SQL to talk to a database on a server, you’ve used a client-server architecture. For example, if the user requests a list of all employees with company life insurance, the client would accept the request, format it so that it could be understood by the server, and transmit it to the server. On receiving the request, the server searches the database for all requested records and then transmits only the matching records to the client, which would then present them to the user. The same would be true for database updates; the client accepts the request and sends it to the server.
The server processes the update and responds (either accepting the update or explaining why not) to the client, which displays it to the user.

One of the strengths of client-server networks is that they enable software and hardware from different vendors to be used together. But this is also one of their disadvantages, because it can be difficult to get software from different vendors to work together. One solution to this problem is middleware, software that sits between the application software on the client and the application software on the server. Middleware does two things. First, it provides a standard way of communicating that can translate between software from different vendors. Many middleware tools began as translation utilities that enabled messages sent from a specific client tool to be translated into a form understood by a specific server tool.

The second function of middleware is to manage the message transfer from clients to servers (and vice versa) so that clients need not know the specific server that contains the application’s data. The application software on the client sends all messages to the middleware, which forwards them to the correct server. The application software on the client is therefore protected from any changes in the physical network. If the network layout changes (e.g., a new server is added), only the middleware must be updated.

There are literally dozens of standards for middleware, each of which is supported by different vendors and each of which provides different functions. Two of the most important standards are Distributed Computing Environment (DCE) and Common Object Request Broker Architecture (CORBA). Both of these standards cover virtually all aspects of the client-server architecture but are quite different.
Any client or server software that conforms to one of these standards can communicate with any other software that conforms to the same standard. Another important standard is Open Database Connectivity (ODBC), which provides a standard for data access logic.

Two-Tier, Three-Tier, and n-Tier Architectures

There are many ways in which the application logic can be partitioned between the client and the server. The example in Figure 2-3 is one of the most common. In this case, the server is responsible for the data and the client, the application and presentation. This is called a two-tier architecture, because it uses only two sets of computers, one set of clients and one set of servers.

A three-tier architecture uses three sets of computers, as shown in Figure 2-4. In this case, the software on the client computer is responsible for presentation logic, an application server is responsible for the application logic, and a separate database server is responsible for the data access logic and data storage.

An n-tier architecture uses more than three sets of computers. In this case, the client is responsible for presentation logic, a database server is responsible for the data access logic and data storage, and the application logic is spread across two or more different sets of servers. Figure 2-5 shows an example of an n-tier architecture of a groupware product called TCB Works developed at the University of Georgia. TCB Works has four major components. The first is the Web browser on the client computer that a user uses to access the system and enter commands (presentation logic). The second component is a Web server that responds to the user’s requests, either by providing HTML pages and graphics (application logic) or by sending the request to the third component, a set of 28 C programs that perform various functions such as adding comments or voting (application logic).
The fourth component is a database server that stores all the data (data access logic and data storage). Each of these four components is separate, making it easy to spread the different components on different servers and to partition the application logic on two different servers.

FIGURE 2-3 Two-tier client-server architecture: the client (a personal computer) provides the presentation logic and application logic; the server (a personal computer, server farm, or mainframe) provides the data access logic and data storage.

FIGURE 2-4 Three-tier client-server architecture: the client (a personal computer) provides the presentation logic; the application server (a personal computer) provides the application logic; the database server (a personal computer, server farm, or mainframe) provides the data access logic and data storage.

The primary advantage of an n-tier client-server architecture compared with a two-tier architecture (or a three-tier compared with a two-tier) is that it separates the processing that occurs to better balance the load on the different servers; it is more scalable. In Figure 2-5, we have three separate servers, which provides more power than if we had used a two-tier architecture with only one server. If we discover that the application server is too heavily loaded, we can simply replace it with a more powerful server, or even put in two application servers. Conversely, if we discover that the database server is underused, we could put data from another application on it.

There are two primary disadvantages to an n-tier architecture compared with a two-tier architecture (or a three-tier compared with a two-tier). First, it puts a greater load on the network. If you compare Figures 2-3, 2-4, and 2-5, you will see that the n-tier model requires more communication among the servers; it generates more network traffic, so you need a higher-capacity network.
Second, it is much more difficult to program and test software in n-tier architectures than in two-tier architectures because more devices have to communicate to complete a user’s transaction.

Thin Clients versus Thick Clients

Another way of classifying client-server architectures is by examining how much of the application logic is placed on the client computer. A thin-client approach places little or no application logic on the client (e.g., Figure 2-5), whereas a thick-client (also called fat-client) approach places all or almost all of the application logic on the client (e.g., Figure 2-3). There is no direct relationship between thin and fat client and two-, three-, and n-tier architectures. For example, Figure 2-6 shows a typical Web architecture: a two-tier architecture with a thin client.

FIGURE 2-5 The n-tier client-server architecture: the client (a personal computer) provides the presentation logic; a Web server and an application server (personal computers or server farms) provide the application logic; the database server (a personal computer, server farm, or mainframe) provides the data access logic and data storage.

FIGURE 2-6 The typical two-tier thin-client architecture of the Web: the client (a personal computer) provides the presentation logic; the Web server (a personal computer or mainframe) provides the application logic, data access logic, and data storage.

One of the biggest forces favoring thin clients is the Web. Thin clients are much easier to manage. If an application changes, only the server with the application logic needs to be updated. With a thick client, the software on all of the clients would need to be updated. Conceptually, this is a simple task; one simply copies the new files to the hundreds of affected client computers. In practice, it can be a very difficult task. Thin-client architectures are the future.
More and more application systems are being written to use a Web browser as the client software, with Java, JavaScript, or AJAX (containing some of the application logic) downloaded as needed. This application architecture is sometimes called the distributed computing model. The thin-client architecture also enables cloud-based architecture, which is discussed next.

2.2.4 Cloud Computing Architectures

The traditional client-server architecture can be complicated and expensive to deploy. Every application has to be hosted on a server so that it can fulfill requests from potentially thousands of clients. An organization has hundreds of applications, so running a successful client-server architecture requires a variety of software and hardware and the skilled personnel who can build and maintain this architecture. Cloud computing architectures are different because they outsource part or all of the infrastructure to other firms that specialize in managing that infrastructure.

There are three common cloud-based architecture models. Figure 2-7 summarizes these three models and compares them to the client-server architecture. The first column of this figure shows the thin-client client-server architecture, in which the organization manages the entire application software and hardware. In addition to the software components we’ve discussed previously (the application logic, data access logic, and the data themselves), the servers need an operating system (e.g., Windows, Linux).
Most companies also use virtualization software to install many virtual or logical servers on the same physical computer. This software (VMware is one of the leaders) creates a separate partition on the physical server for each of the logical servers. Each partition has its own operating system and its own server software and works independently from the other partitions. This software must run on some hardware, which includes a server, a storage device, and the network itself.

FIGURE 2-7 Cloud architecture models compared to the thin-client client-server architecture (Source: Adapted from www.cbc.radio-canada.ca/en/reporting-to-canadians/sync/sync-issue-1-2012/cloud-services). Who manages which parts (Internal vs. Outsourced):

Component                 Thin-Client C-S   IaaS         PaaS         SaaS
Application Logic         Internal          Internal     Internal     Outsourced
Data Storage              Internal          Internal     Internal     Outsourced
Data Access Logic         Internal          Internal     Outsourced   Outsourced
Operating System          Internal          Internal     Outsourced   Outsourced
Virtualization Software   Internal          Internal     Outsourced   Outsourced
Server Hardware           Internal          Outsourced   Outsourced   Outsourced
Storage Hardware          Internal          Outsourced   Outsourced   Outsourced
Network Hardware          Internal          Outsourced   Outsourced   Outsourced

The server may be a large computer or a server farm. A server farm is a cluster of computers linked together so that they act as one computer. Requests arrive at the server farm (e.g., Web requests) and are distributed among the computers so that no one computer is overloaded. Each computer is separate, so that if one fails, the server farm simply bypasses it. Server farms are more complex than single servers because work must be quickly coordinated and shared among the individual computers. Server farms are very scalable because one can always add another computer.

FIGURE 2-8 One row of a server farm at Indiana University (Source: Courtesy of the author, Alan Dennis). Figure 2-8 shows one row of a server farm at Indiana University. There are seven more rows like this one in this room, and another room contains about the same number.
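The load-sharing behavior just described can be sketched as a simple round-robin dispatcher that skips failed machines. This is only an illustration of the idea; real farms use dedicated load balancers with health checks, and the server names here are made up:

```python
# Sketch of server-farm request distribution: round-robin with failure bypass.
from itertools import cycle

servers = ["web01", "web02", "web03"]
healthy = {"web01": True, "web02": False, "web03": True}  # web02 has failed

def dispatch(requests):
    """Assign each request to the next healthy server in rotation.
    Assumes at least one server is healthy."""
    assignments = []
    pool = cycle(servers)
    for req in requests:
        server = next(pool)
        while not healthy[server]:   # the farm simply bypasses a failed machine
            server = next(pool)
        assignments.append((req, server))
    return assignments

print(dispatch(["r1", "r2", "r3", "r4"]))
# web02 never receives work; requests alternate between web01 and web03
```

Adding capacity is just appending another name to `servers`, which is why the text calls server farms very scalable.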
Many companies use separate storage devices instead of the hard disks in the servers themselves. These storage devices are special-purpose hard disks designed to be very large and very fast. The six devices on the left of Figure 2-8 comprise a special storage device called a storage area network (SAN).

Software as a Service (SaaS) SaaS is one of the three cloud computing models. With SaaS, an organization outsources the entire application to the cloud provider (see the last column of Figure 2-7) and uses it like any other application that is available via a browser (thin client). SaaS is based on multitenancy: rather than having many copies of the same application, there is only one application that everybody shares, yet everybody can customize it for his or her specific needs. Imagine a giant office building in which all the tenants share the infrastructure (water, A/C, electricity) but can customize the offices they are renting. Customers can customize the application and don't have to worry about upgrades, security, or the underlying infrastructure, because the cloud provider handles it all. The most frequently used SaaS application is email. At Indiana University, all student email is outsourced to Google's Gmail. Customer relationship management (CRM) from Salesforce.com is another very commonly used SaaS.

Platform as a Service (PaaS) PaaS is another of the three cloud computing models. What if there is an application you need but no cloud provider offers one you like? You can build your own application and manage your own data on the cloud infrastructure provided by your cloud supplier. This model is called Platform as a Service (PaaS). The developers in your organization decide what programming language to use to develop the application of choice.
The needed hardware and software infrastructure, called the platform, is rented from the cloud provider (see Figure 2-7). In this case, the organization manages the application and its own data but uses the database software (data access logic) and operating system provided by the cloud provider. PaaS offers much faster development and deployment of custom applications at a fraction of the cost required for the traditional client-server architecture. PaaS providers include Amazon Elastic Compute Cloud (EC2), Microsoft Windows Azure, and Google App Engine.

Infrastructure as a Service (IaaS) As you can see in Figure 2-7, with IaaS, the cloud provider manages the hardware, including servers, storage, and networking components. The organization is responsible for all the software, including the operating system (and virtualization software), database software, and its applications and data. IaaS is sometimes also referred to as HaaS, or Hardware as a Service, because in this cloud model only the hardware is provided; everything else is up to the organization. This model reduces capital expenditures for hardware and for maintaining the proper environment (e.g., cooling), redundancy, and backups for data and applications. Providers of IaaS include Amazon Web Services, Microsoft Windows Azure, and Akamai.

In conclusion, cloud computing fundamentally changes the way we think about applications: they are rented and paid for as a service. The idea is the same as for utilities such as water, gas, cable, and phone. The provider of the utility builds and runs the infrastructure; you plug in and sign up for a type of service. Sometimes you pay as you go (water, gas); sometimes you sign up for a level of service (phone, cable).

2.2.5 Peer-to-Peer Architectures

Peer-to-peer (P2P) architectures are very old, but their modern design became popular in the early 2000s with the rise of P2P file-sharing applications (e.g., Napster).
With a P2P architecture, all computers act as both client and server. Therefore, all computers perform all four functions: presentation logic, application logic, data access logic, and data storage (see Figure 2-9). With a P2P file-sharing application, a user uses the presentation, application, and data access logic installed on his or her computer to access the data stored on another computer in the network. With a P2P application-sharing network (e.g., grid computing such as seti.org), other users in the network can use others' computers to access application logic as well.

FIGURE 2-9 Peer-to-peer architecture. Each client (personal computer) contains presentation logic, application logic, data access logic, and data storage.

The advantage of P2P networks is that the data can be stored anywhere on the network. They spread the storage throughout the network, even globally, so they can be very resilient to the failure of any one computer. The challenge is finding the data. There must be some central server that enables you to find the data you need, so P2P architectures often are combined with a client-server architecture. Security is a major concern in most P2P networks, so P2P architectures are not commonly used in organizations, except for specialized computing needs (e.g., grid computing).

2.2.6 Choosing Architectures

Each of the preceding architectures has certain costs and benefits, so how do you choose the "right" architecture? In many cases, the architecture is simply a given; the organization has a certain architecture, and one simply has to use it. In other cases, the organization is acquiring new equipment and writing new software and has the opportunity to develop a new architecture, at least in some part of the organization.
Almost all new applications today are client-server applications. Client-server architectures provide the best scalability: the ability to increase (or decrease) the capacity of the servers to meet changing needs. For example, we can easily add or remove application servers or database servers depending on whether we need more or less capacity for application software or for database software and storage. Client-server architectures are also the most reliable. We can use multiple servers to perform the same tasks, so that if one server fails, the remaining servers continue to operate and users don't notice problems. Finally, client-server architectures are usually the cheapest because many tools exist to develop them, and lots of client-server software exists for specific parts of applications, so we can quickly buy the parts of the application we need. For example, no one writes shopping cart software anymore; it's cheaper to buy a shopping cart application and put it on an application server than it is to write your own.

Client-server architectures also enable cloud computing. As we mentioned in Section 2.2.4, companies may choose Software as a Service (SaaS) because of its low price and high scalability compared to a traditional client-server architecture hosted in-house. One major issue that companies face when choosing SaaS is the security of their data. Each company has to evaluate the risk of its data being compromised and select its cloud provider carefully. Nevertheless, SaaS is gaining popularity, and companies are becoming more and more accustomed to this solution.

MANAGEMENT FOCUS 2-1: Cloud Computing with Salesforce.com

Salesforce.com, the world's number one cloud platform, is the poster child for cloud computing. Companies used to buy and install software for customer relationship management (CRM), the process of identifying potential customers, marketing to them, converting them into customers, and managing the relationship to retain them.
The software and needed servers were expensive and took a long time to acquire and install. Typically, only large firms could afford it. Salesforce.com changed this by offering a cloud computing solution. The CRM software offered by salesforce.com resides on the salesforce.com servers. There is no need to buy and install new hardware or software. Companies just pay a monthly fee to access the software over the Internet. Companies can be up and running in weeks, not months, and it is easy to scale from a small implementation to a very large one. Because salesforce.com can spread its costs over so many users, it can offer deals to small companies that normally wouldn't be able to afford to buy and install their own software.

Salesforce is a very competitive organization that is keeping up with the mobile world too. In fall 2013, it announced the "Salesforce $1 Million Hackathon," where hundreds of teams competed to build the next killer mobile app on the Salesforce platform. Yes, the winning team walked away with $1 million! Although we don't know the winner of this largest single hackathon, the reader can discover it easily with a quick Web search.

2.3 WORLD WIDE WEB

The Web was first conceived in 1989 by Sir Tim Berners-Lee at the European Particle Physics Laboratory (CERN) in Geneva. His original idea was to develop a database of information on physics research, but he found it difficult to fit the information into a traditional database. Instead, he decided to use a hypertext network of information. With hypertext, any document can contain a link to any other document. CERN's first Web browser was created in 1990, but it was 1991 before it was available on the Internet for other organizations to use.
By the end of 1992, several browsers had been created for UNIX computers by CERN and several other European and American universities, and there were about 30 Web servers in the entire world. In 1993, Marc Andreessen, a student at the University of Illinois, led a team of students that wrote Mosaic, the first graphical Web browser, as part of a project for the university's National Center for Supercomputing Applications (NCSA). By the end of 1993, the Mosaic browser was available for UNIX, Windows, and Macintosh computers, and there were about 200 Web servers in the world. Today, no one knows for sure how many Web servers there are. There are more than 250 million separate Web sites, but many of these are hosted on the same servers by large hosting companies such as godaddy.com or Google Sites.

2.3.1 How the Web Works

The Web is a good example of a two-tier client-server architecture (Figure 2-10). Each client computer needs an application layer software package called a Web browser. There are many different browsers, such as Microsoft's Internet Explorer. Each server on the network that will act as a Web server needs an application layer software package called a Web server. There are many different Web servers, such as those produced by Microsoft and Apache.

To get a page from the Web, the user must type the Internet uniform resource locator (URL) for the page he or she wants (e.g., www.yahoo.com) or click on a link that provides the URL. The URL specifies the Internet address of the Web server and the directory and name of the specific page wanted. If no directory and page are specified, the Web server will provide whatever page has been defined as the site's home page. For the requests from the Web browser to be understood by the Web server, they must use the same standard protocol or language.
If there were no standard and each Web browser used a different protocol to request pages, then it would be impossible for a Microsoft Web browser to communicate with an Apache Web server, for example. The standard protocol for communication between a Web browser and a Web server is Hypertext Transfer Protocol (HTTP). To get a page from a Web server, the Web browser issues a special packet called an HTTP request that contains the URL and other information about the Web page requested (see Figure 2-10). Once the server receives the HTTP request, it processes it and sends back an HTTP response, which will be the requested page or an error message (see Figure 2-10).

FIGURE 2-10 How the Web works. A client computer with Web browser software sends an HTTP request over the Internet to a server computer with Web server software, which sends back an HTTP response.

This request-response dialogue occurs for every file transferred between the client and the server. For example, suppose the client requests a Web page that has two graphic images. Graphics are stored in separate files from the Web page itself, using a different file format than the HTML used for the Web page (in JPEG [Joint Photographic Experts Group] format, for example). In this case, there would be three request-response pairs. First, the browser would issue a request for the Web page, and the server would send the response. Then, the browser would begin displaying the Web page and notice the two graphic files. The browser would then send a request for the first graphic and a request for the second graphic, and the server would reply with two separate HTTP responses, one for each request.

2.3.2 Inside an HTTP Request

The HTTP request and HTTP response are examples of the packets we introduced in Chapter 1 that are produced by the application layer and sent down to the transport, network, data link, and physical layers for transmission through the network.
The HTTP request and HTTP response are simple text files that take the information provided by the application (e.g., the URL to get) and format it in a structured way so that the receiver of the message can clearly understand it.

An HTTP request from a Web browser to a Web server has three parts. The first two parts are required; the last is optional. The parts are:

◾ The request line, which starts with a command (e.g., GET), provides the Web page, and ends with the HTTP version number that the browser understands; the version number ensures that the Web server does not attempt to use a more advanced or newer version of the HTTP standard that the browser does not understand.

◾ The request header, which contains a variety of optional information, such as the Web browser being used (e.g., Internet Explorer) and the date.

◾ The request body, which contains information sent to the server, such as information that the user has typed into a form.

Figure 2-11 shows an example of an HTTP request for a page on our Web server, formatted using version 1.1 of the HTTP standard.

FIGURE 2-11 An example of a request from a Web browser to a Web server using the HTTP (Hypertext Transfer Protocol) standard, showing a request line and a request header dated 03 Jan 2011.

This request has only the request line and the request header, because no request body is needed for this request. This request includes the date and time of the request (expressed in Greenwich Mean Time [GMT], the time zone that runs through London) and the name of the browser used (Mozilla is the code name for the browser). The "Referrer" field means that the user obtained the URL for this Web page by clicking on a link on another page, which in this case is a list of faculty at Indiana University (i.e., www.indiana.edu/~isdept/faculty.htm). If the referrer field is blank, then the user typed the URL himself or herself.
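Because the three request parts are just lines of text, a request can be assembled with ordinary string handling. The Python sketch below builds a minimal request; the host, path, and header values are illustrative rather than the exact contents of Figure 2-11.

```python
def build_http_request(host, path, referer=None):
    """Assemble the three-part HTTP request described above:
    request line, request header, and (here, empty) request body."""
    lines = [f"GET {path} HTTP/1.1",        # request line: command, page, version
             f"Host: {host}",
             "User-Agent: Mozilla/5.0"]     # header fields are optional
    if referer:
        # The on-the-wire header is spelled "Referer" (a historical typo
        # preserved by the HTTP standard).
        lines.append(f"Referer: {referer}")
    return "\r\n".join(lines) + "\r\n\r\n"  # a blank line ends the headers

request = build_http_request("www.indiana.edu", "/~isdept/faculty.htm")
```

The trailing blank line is what tells the server the headers have ended; with no request body after it, this is exactly the two-part request the text describes.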
You can see inside HTTP headers yourself at www.rexswain.com/httpview.html.

2.3.3 Inside an HTTP Response

The format of an HTTP response from the server to the browser is very similar to the HTTP request. It, too, has three parts, with the first required and the last two optional:

◾ The response status, which contains the HTTP version number the server has used, a status code (e.g., 200 means "OK"; 404 means "not found"), and a reason phrase (a text description of the status code).

◾ The response header, which contains a variety of optional information, such as the Web server being used (e.g., Apache), the date, and the exact URL of the page in the response.

◾ The response body, which is the Web page itself.

Figure 2-12 shows an example of a response from our Web server to the request in Figure 2-11. This example has all three parts. The response status reports "OK," which means the requested URL was found and is included in the response body. The response header provides the date, the type of Web server software used, the actual URL included in the response body, and the type of file. In most cases, the actual URL and the requested URL are the same, but not always. For example, if you request a URL but do not specify a file name (e.g., www.indiana.edu), you will receive whatever file is defined as the home page for that server, so the actual URL will be different from the requested URL.

MANAGEMENT FOCUS 2-2: Top Players in Cloud Email

Among the wide variety of applications that organizations are using, email is the one most frequently deployed as SaaS. Four major industry players provide email as SaaS: Google, Microsoft, USA.NET, and Intermedia. Although cloud-based email seems to appeal more to smaller companies, it provides a cost-effective solution for organizations with up to 15,000 users (as a rule of thumb). Google was the first company to enter this market and offered Google Apps, Calendar, and 30 GB of storage in addition to email.
Microsoft entered this market in 2008 and offered Microsoft Office 365. Microsoft offers not only email but the whole MS Office suite, and of course, all the office applications are accessible from multiple devices. USA.NET is a SaaS company that offers Microsoft Exchange and robust security features that meet federal and industry regulations, such as FINRA and HIPAA. It serves approximately 6,000 organizations worldwide that provide financial, health care, energy, and critical infrastructure services. In addition, USA.NET offers a Security-as-a-Service platform from the cloud. Finally, Intermedia, which was founded in 1995, is the largest Microsoft-hosted Exchange provider. It was the first company to offer Hosted Microsoft Exchange, and today it has 90,000 customers and more than 700,000 users. Just like Microsoft, Intermedia delivers the Office suite in the cloud. The prices for the services these companies offer differ quite a bit. The cheapest of the four is Google, starting at $4.17 per user per month. However, these are basic prices that increase with the number of features and services added.

FIGURE 2-12 An example of a response from a Web server to a Web browser using the HTTP standard, showing a response status, a response header dated 03 Jan 2011, and a response body.

The response body in this example shows a Web page in Hypertext Markup Language (HTML). The response body can be in any format, such as text, Microsoft Word, Adobe PDF, or a host of other formats, but the most commonly used format is HTML. HTML was developed by CERN at the same time as the first Web browser and has evolved rapidly ever since.
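Going the other way, a response like the one in Figure 2-12 can be split mechanically into the three parts described above. The sketch below uses a made-up response whose body is a trivial HTML page; the header values are invented for illustration.

```python
def parse_http_response(raw):
    """Split an HTTP response into the three parts described above:
    response status, response header, and response body."""
    head, _, body = raw.partition("\r\n\r\n")      # blank line separates body
    status_line, *header_lines = head.split("\r\n")
    version, code, reason = status_line.split(" ", 2)  # e.g. HTTP/1.1 200 OK
    headers = dict(line.split(": ", 1) for line in header_lines)
    return version, int(code), reason, headers, body

sample = ("HTTP/1.1 200 OK\r\n"
          "Server: Apache\r\n"
          "Content-Type: text/html\r\n"
          "\r\n"
          "<html><body>Hello</body></html>")

version, code, reason, headers, body = parse_http_response(sample)
```

The status code 200 and reason phrase "OK" correspond to the "found and included" case the text describes; a missing page would carry 404 and "not found" in the same positions.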
HTML is covered by standards produced by the IETF, but Microsoft keeps making new additions to HTML with every release of its browser, so the HTML standard keeps changing.

2.4 ELECTRONIC MAIL

Electronic mail (or email) was one of the earliest applications on the Internet and is still among the most heavily used today. With email, users create and send messages to one user, several users, or all users on a distribution list. Most email software enables users to send text messages and attach files from word processors, spreadsheets, graphics programs, and so on. Many email packages also permit you to filter or organize messages by priority.

Several standards have been developed to ensure compatibility between different email software packages. Any software package that conforms to a certain standard can send messages that are formatted using its rules. Any other package that understands that particular standard can then relay the message to its correct destination; however, if an email package receives a mail message in a different format, it may be unable to process it correctly. Many email packages send using one standard but can understand messages sent in several different standards. The most commonly used standard is SMTP (Simple Mail Transfer Protocol). Other common standards are X.400 and CMC (Common Messaging Calls). In this book, we will discuss only SMTP, but CMC and X.400 both work essentially the same way.
SMTP, X.400, and CMC are different from one another (in the same way that English differs from French or Spanish), but several software packages are available that translate between them, so companies that use one standard (e.g., CMC) can translate messages they receive in a different standard (e.g., SMTP) into their usual standard as the messages first enter the company and then treat them as "normal" email messages after that.

2.4.1 How Email Works

The Simple Mail Transfer Protocol (SMTP) is the most commonly used email standard simply because it is the email standard used on the Internet. Email works similarly to how the Web works, but it is a bit more complex. SMTP email is usually implemented as a two-tier thick client-server application, but not always. We first explain how the normal two-tier thick client architecture works and then quickly contrast that with two alternate architectures.

Two-Tier Email Architecture With a two-tier thick client-server architecture, each client computer runs an application layer software package called a mail user agent, which is more commonly called an email client (see Figure 2-13). There are many common email client software packages, such as Eudora and Outlook. The user creates the email message using one of these email clients, which formats the message into an SMTP packet that includes information such as the sender's address and the destination address. The user agent then sends the SMTP packet to a mail server that runs a special application layer software package called a mail transfer agent, which is more commonly called mail server software (see Figure 2-13). This email server reads the SMTP packet to find the destination address and then sends the packet on its way through the network (often over the Internet) from mail server to mail server, until it reaches the mail server specified in the destination address (see Figure 2-13).
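What the email client hands to the mail transfer agent can be sketched with Python's standard email library: addresses and subject become header fields, and the text becomes the body. The names and addresses below are invented for illustration.

```python
from email.message import EmailMessage

# A sketch of the mail user agent's job: assemble the header fields and
# body that the SMTP packet will carry. All addresses here are made up.
msg = EmailMessage()
msg["From"] = "Pat Smith <pat@example.edu>"
msg["To"] = "Chris Jones <chris@example.com>"
msg["Subject"] = "Greetings"
msg.set_content("Hello from the application layer.")

packet = msg.as_string()   # the plain text that SMTP actually transmits
```

Printing `packet` shows that the whole message really is simple structured text, which is why any mail transfer agent along the path can read the destination address and forward it.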
The mail transfer agent on the destination server then stores the message in the receiver's mailbox on that server. The message sits in the mailbox assigned to the user until he or she checks for new mail.

The SMTP standard covers message transmission between mail servers (i.e., mail server to mail server) and between the originating email client and its mail server. A different standard is used to communicate between the receiver's email client and his or her mail server. Two commonly used standards for communication between email client and mail server are Post Office Protocol (POP) and Internet Message Access Protocol (IMAP). Although there are several important technical differences between POP and IMAP, the most noticeable difference is that before a user can read a mail message with a POP (version 3) email client, the email message must be copied to the client computer's hard disk and deleted from the mail server. With IMAP, email messages can remain stored on the mail server after they are read. IMAP therefore offers considerable benefits to users who read their email from many different computers (e.g., home, office, computer labs) because they no longer need to worry about having old email messages scattered across several client computers; all email is stored on the server until it is deleted.

FIGURE 2-13 How SMTP (Simple Mail Transfer Protocol) email works. IMAP = Internet Message Access Protocol; LAN = local area network. A client computer with email client software (mail user agent) sends an SMTP packet over the LAN to a server computer with email server software (mail transfer agent); the SMTP packet travels over the Internet to the destination mail server, and the receiving client retrieves the message with an IMAP or POP packet.

In our example in Figure 2-13, when the receiver next accesses his or her email, the email client on his or her computer contacts the mail server by sending an IMAP or a POP packet that asks for the contents of the user's mailbox. In Figure 2-13, we show this as an IMAP packet, but it could just as easily be a POP packet. When the mail server receives the IMAP or POP request, it converts the original SMTP packet created by the message sender into a POP or an IMAP packet that is sent to the client computer, which the user reads with the email client. Therefore, any email client using POP or IMAP must also understand SMTP to create messages. POP and IMAP provide a host of functions that enable the user to manage his or her email, such as creating mail folders, deleting mail, creating address books, and so on. If the user sends a POP or an IMAP request for one of these functions, the mail server will perform the function and send back a POP or an IMAP response packet that is much like an HTTP response packet.

Three-Tier Thin Client-Server Architecture The three-tier thin client-server email architecture uses a Web server and Web browser to provide access to your email. With this architecture, you do not need an email client on your client computer. Instead, you use your Web browser. This type of email is sometimes called Web-based email and is provided by a variety of companies such as Hotmail and Yahoo!. You use your browser to connect to a page on a Web server that lets you write the email message by filling in a form. When you click the send button, your Web browser sends the form information to the Web server inside an HTTP request (Figure 2-14). The Web server runs a program (written in C or Perl, for example) that takes the information from the HTTP request and builds an SMTP packet that contains the email message.
Although not important to our example, it also sends an HTTP response back to the client. The Web server then sends the SMTP packet to the mail server, which processes the SMTP packet as though it came from a client computer. The SMTP packet flows through the network in the same manner as before. When it arrives at the destination mail server, it is placed in the receiver's mailbox.

When the receiver wants to check his or her mail, he or she uses a Web browser to send an HTTP request to a Web server (see Figure 2-14). A program on the Web server (in C or Perl, for example) processes the request and sends the appropriate POP request to the mail server. The mail server responds with a POP packet, which a program on the Web server converts into an HTTP response and sends to the client. The client then displays the email message in the Web browser.

FIGURE 2-14 Inside Web-based email. HTTP = Hypertext Transfer Protocol; LAN = local area network; POP = Post Office Protocol; SMTP = Simple Mail Transfer Protocol. The sender's browser submits the message to a Web server in an HTTP request; the Web server builds an SMTP packet and hands it to a mail server, which forwards it over the Internet. The receiver's Web server retrieves the message from its mail server with a POP packet and returns it to the receiver's browser in an HTTP response.

TECHNICAL FOCUS 2-2: SMTP Transmission

SMTP (Simple Mail Transfer Protocol) is an older protocol, and transmission using it is rather complicated. If we were going to design it again, we would likely find a simpler transmission method. Conceptually, we think of an SMTP packet as one packet.
However, SMTP mail transfer agents transmit each element within the SMTP packet as a separate packet and wait for the receiver to respond with an "OK" before sending the next element. For example, in Figure 2-15, the sending mail transfer agent would send the from address and wait for an OK from the receiver. Then it would send the to address and wait for an OK. Then it would send the date, and so on, with the last item being the entire message sent as one element.

FIGURE 2-15 An example of an email message using the SMTP (Simple Mail Transfer Protocol) standard, showing a header dated 03 Jan 2011 and a body.

A simple comparison of Figures 2-13 and 2-14 will quickly show that the three-tier approach using a Web browser is much more complicated than the normal two-tier approach. So why do it? Well, it is simpler to have just a Web browser on the client computer rather than to require the user to install a special email client and then set it up to connect to the correct mail server using either POP or IMAP. It is simpler for the user to just type the URL of the Web server providing the mail services into his or her browser and begin using mail. This also means that users can check their email from a public computer anywhere on the Internet.

It is also important to note that the sender and receiver do not have to use the same architecture for their email. The sender could use a two-tier client-server architecture and the receiver a host-based or three-tier client-server architecture. Because all communication between the different mail servers is standardized using SMTP, how the users interact with their mail servers is unimportant. Each organization can use a different approach. In fact, there is nothing to prevent one organization from using all three architectures simultaneously.
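The lock-step exchange described in Technical Focus 2-2 can be sketched as a toy Python dialogue. The element list follows the book's description (from address, to address, date, then the whole message) rather than the exact SMTP command set, and the receiver here is a stand-in that simply acknowledges everything.

```python
def smtp_elements(from_addr, to_addr, date, message):
    """The elements a sending mail transfer agent transmits one at a
    time, each acknowledged before the next is sent."""
    return [f"FROM: <{from_addr}>",   # from address, then wait for an OK
            f"TO: <{to_addr}>",       # to address, then wait for an OK
            f"DATE: {date}",          # date, and so on
            message]                  # finally the entire message as one element

def toy_receiver(element):
    """Stand-in for the receiving mail transfer agent: acknowledge everything."""
    return "OK"

transcript = []
for element in smtp_elements("pat@example.edu", "chris@example.com",
                             "03 Jan 2011", "Hello from the sender."):
    reply = toy_receiver(element)     # the sender blocks until this arrives
    transcript.append((element, reply))
```

Each round trip in the transcript is one of the element-by-element acknowledgments the Technical Focus describes, which is why SMTP transmission is slower and chattier than sending everything in one exchange.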
At Indiana University, email is usually accessed through an email client (e.g., Microsoft Outlook) but is also accessed over the Web, because many users travel internationally and find it easier to borrow a Web browser with Internet access than to borrow an email client and set it up to use the Indiana University mail server.

2.4.2 Inside an SMTP Packet

SMTP defines how message transfer agents operate and how they format messages sent to other message transfer agents. An SMTP packet has two parts:

◾ The header, which lists the source and destination email addresses (possibly in text form [e.g., "Pat Smith"]) as well as the address itself (e.g., [email protected]), date, subject, and so on.

◾ The body, which is the word DATA, followed by the message itself.

Figure 2-15 shows a simple email message formatted using SMTP. The header of an SMTP message has a series of fields that provide specific information, such as the sender's email address, the receiver's address, date, and so on. The information in quotes on the from and to lines is ignored by SMTP; only the information in the angle brackets is used in email addresses. The message ID field provides a unique identification code so that the message can be tracked. The message body contains the actual text of the message itself.

2.4.3 Attachments in Multipurpose Internet Mail Extension

As the name suggests, SMTP is a simple standard that permits only the transfer of text messages. It was developed in the early days of computing, when no one had even thought about using email to transfer nontext files such as graphics or word processing documents. Several standards for nontext files have been developed that can operate together with SMTP, such as Multipurpose Internet Mail Extension (MIME), uuencode, and binhex. Each of the standards is different, but all work in the same general way.
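That general way, translating arbitrary bytes into characters that look like regular text, can be seen with Python's base64 module; Base64 is the encoding MIME most commonly uses for attachments. The sample bytes below are arbitrary stand-ins for the start of a binary file.

```python
import base64

# A few bytes that could not travel through a text-only mail system as-is
binary = bytes([0, 255, 137, 80, 78, 71])   # e.g., the start of an image file

encoded = base64.b64encode(binary).decode("ascii")  # safe, text-like form
decoded = base64.b64decode(encoded)                 # the receiver reverses it
```

The `encoded` string contains only ordinary printable characters, so SMTP can carry it as if it were message text, and decoding recovers the original bytes exactly, which is the round trip the next paragraph describes.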
The MIME software, which exists as part of the email client, takes the nontext file, such as a PowerPoint graphic file, and translates each byte in the file into a special code that looks like regular text. This encoded section of "text" is then labeled with a series of special fields understood by SMTP as identifying a MIME-encoded attachment and specifying information about the attachment (e.g., name of file, type of file). When the receiver's email client receives the SMTP message with the MIME attachment, it recognizes the MIME "text" and uses its MIME software (that is part of the email client) to translate the file from MIME "text" back into its original format.

2.5 OTHER APPLICATIONS

There are literally thousands of applications that run on the Internet and on other networks. Most application software that we develop today, whether for sale or for private internal use, runs on a network. We could spend years talking about different network applications and still cover only a small number.

A Day in the Life: Network Manager

It was a typical day for a network manager. It began with the setup and troubleshooting for a videoconference. Videoconferencing is a fairly routine activity, but this one was a little different; we were trying to videoconference with a different company that used different standards than we did. We attempted to use our usual Web-based videoconferencing but could not connect. We fell back to videoconferencing over telephone lines, which required bringing in our videoconferencing services group. It took two hours, but we finally had the technology working. The next activity was building a Windows database server. This involved installing software, adding a server into our ADS domain, and setting up the user accounts.
Once the server was on the network, it was critical to install all the security patches for both the operating system and the database server. We receive so many security attacks that it is our policy to install all security patches on the same day that new software or servers are placed on the network or the patches are released. After lunch, the next two hours were spent in a boring policy meeting. These meetings are a necessary evil to ensure that the network is well managed. It is critical that users understand what the network can and can't be used for and what support they can expect from us. Managing users' expectations about support and use rules helps ensure high user satisfaction. The rest of the day was spent refining the tool we use to track network utilization. We have a simple intrusion detection system to detect hackers, but we wanted more detailed information on network errors and network utilization to better assist us in network planning.

Source: With thanks to Jared Beard

Fortunately, most network application software works in much the same way as the Web or email. In this section, we will briefly discuss only three commonly used applications: Telnet, instant messaging (IM), and videoconferencing.

2.5.1 Telnet

Telnet enables users to log in to servers (or other clients). It requires an application layer program on the client computer and an application layer program on the server or host computer. Once Telnet makes the connection from the client to the server, you must use the account name and password of an authorized user to log in. Although Telnet was developed in the very early days of the Internet (in fact, the very first application that tested connectivity on ARPANET was Telnet), it is still widely used today. Because it was developed so long ago, Telnet assumes a host-based architecture.
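Telnet's host-based division of labor (the client forwards every keystroke and displays whatever the host sends back) can be illustrated with a toy simulation. Note that this is not a real Telnet implementation; the TinyHost class and its fake shell prompt are invented for the sketch.

```python
# Toy stand-in for a remote host in a Telnet-style architecture: the
# client does no processing at all; it forwards each keystroke and
# displays exactly what the host tells it to. (Invented for the sketch;
# real Telnet adds option negotiation, and SSH adds encryption.)
class TinyHost:
    def __init__(self):
        self.line = ""                        # keystrokes received so far
    def keystroke(self, ch):
        if ch == "\n":                        # Enter: host "runs" the line
            cmd, self.line = self.line, ""
            return f"\r\n(ran: {cmd})\r\nhost$ "  # host chooses the output
        self.line += ch
        return ch                             # host echoes the character

host = TinyHost()
screen = "host$ "                             # what the client displays
for ch in "ls\n":                             # user types l, s, Enter
    screen += host.keystroke(ch)              # one round trip per keystroke
print(screen)
```

Each keystroke costs a network round trip, which is one reason Telnet sessions feel sluggish on slow links and why sending those keystrokes as plain text was such a security risk.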
Any keystrokes that you type using Telnet are sent to the server for processing, and then the server instructs the client what to display on the screen. One of the most frequently used Telnet software packages is PuTTY. PuTTY is open source and can be downloaded for free (and in case you're wondering, the name does not stand for anything, although TTY is a commonly used abbreviation for "terminal" in UNIX-based systems). The very first Telnet applications posed a great security threat because every keystroke was sent over the network as plain text. PuTTY uses secure shell (SSH) encryption when communicating with the server so that no one can read what is typed. An additional advantage of PuTTY is that it can run on multiple platforms, such as Windows, Mac, or Linux. Today, PuTTY is routinely used by network administrators to log in to servers and routers to make configuration changes.

MANAGEMENT FOCUS 2-3: Tagging People

Joseph Krull has a chip on his shoulder—well, in his shoulder to be specific. Krull is one of a small but growing number of people who have a Radio Frequency Identification (RFID) chip implanted in their bodies. RFID technology has been used to identify pets, so that lost pets can be easily reunited with their owners. Now, the technology is being used for humans. Krull has a blown left pupil from a skiing accident. If he were injured in an accident and unable to communicate, an emergency room doctor might misinterpret his blown pupil as a sign of a major head injury and begin drilling holes to relieve pressure. Now doctors can use the RFID chip to identify Krull and quickly locate his complete medical records on the Internet. Critics say such RFID chips pose huge privacy risks because they enable any firm using RFID to track users such as Krull. Retailers, for example, can track when he enters and leaves their stores. Krull doesn't care.
He believes the advantages of having his complete medical records available to any doctor greatly outweigh the privacy concerns. Tagging people is no longer the novelty it once was; in fact, today it is a U.S. Food and Drug Administration approved procedure. More than 10% of all RFID research projects worldwide involve tagging people. There are even do-it-yourself RFID tagging kits available—not that we would recommend them (www.youtube.com/watch?v=vsk6dJr4wps). Besides the application to health records, RFID is also being used for security applications, even something as simple as door locks. Imagine having an RFID-based door lock that opens automatically when you walk up to it because it recognizes the RFID tag in your body.

Adapted from: NetworkWorld, ZDNet, and GizMag.com

2.5.2 Instant Messaging

One of the fastest growing Internet applications has been instant messaging (IM). With IM, you can exchange real-time typed messages or chat with your friends. Some IM software also enables you to verbally talk with your friends in the same way as you might use the telephone, or to use cameras to exchange real-time video in the same way you might use a videoconferencing system. Several types of IM currently exist, including Google Talk and AOL Instant Messenger. Instant messaging works in much the same way as the Web. The client computer needs an IM client software package, which communicates with an IM server software package that runs on a server. When the user connects to the Internet, the IM client software package sends an IM request packet to the IM server informing it that the user is now online. The IM client software package continues to communicate with the IM server to monitor what other users have connected to the IM server. When one of your friends connects to the IM server, the IM server sends an IM packet to your client computer so that you now know that your friend is connected to the Internet. The server also sends a packet to your friend's client computer so that he or she knows that you are on the Internet. With the click of a button, you can both begin chatting. When you type text, your IM client creates an IM packet that is sent to the IM server (Figure 2-16). The server then retransmits the packet to your friend. Several people may be part of the same chat session, in which case the server sends a copy of the packet to all of the client computers. IM also provides a way for different servers to communicate with one another, and for the client computers to communicate directly with each other. Additionally, IM supports voice and video.

FIGURE 2-16 How instant messaging (IM) works (LAN = local area network; IM packets travel from a client computer with IM client software across its LAN and the Internet to the server computer with IM server software, and on across another LAN to the other client computer)

2.5.3 Videoconferencing

Videoconferencing provides real-time transmission of video and audio signals to enable people in two or more locations to have a meeting. In some cases, videoconferences are held in special-purpose meeting rooms with one or more cameras and several video display monitors to capture and display the video signals (Figure 2-17). Special audio microphones and speakers are used to capture and play audio signals. The audio and video signals are combined into one signal that is transmitted through a MAN or WAN to people at the other location. Most of this type of videoconferencing involves two teams in two separate meeting rooms, but some systems can support conferences of up to eight separate meeting rooms. Some advanced systems provide telepresence, which is of such high quality that you feel you are face-to-face with the other participants.
The fastest growing form of videoconferencing is desktop videoconferencing. Small cameras installed on top of each computer permit meetings to take place from individual offices (Figure 2-18).

FIGURE 2-17 A Cisco telepresence system (Source: Courtesy Cisco Systems, Inc. Unauthorized use not permitted)

FIGURE 2-18 Desktop videoconferencing (Source: Courtesy Cisco Systems, Inc. Unauthorized use not permitted)

Special application software (e.g., Yahoo! IM, Skype, NetMeeting) is installed on the client computer and transmits the images across a network to application software on a videoconferencing server. The server then sends the signals to the other client computers that want to participate in the videoconference. In some cases, the clients can communicate with one another without using the server. The cost of desktop videoconferencing ranges from less than $20 per computer for inexpensive systems to more than $1,000 for high-quality systems. Some systems have integrated conferencing software with desktop videoconferencing, enabling participants to communicate verbally and, by using applications such as whiteboards, to attend the same meeting while they are sitting at the computers in their offices. The transmission of video requires a lot of network capacity. Most videoconferencing uses data compression to reduce the amount of data transmitted. Surprisingly, the most common complaint is not the quality of the video image but the quality of the voice transmissions. Special care needs to be taken in the design and placement of microphones and speakers to ensure quality sound and minimal feedback. Most videoconferencing systems were originally developed by vendors using different formats, so many products were incompatible. The best solution was to ensure that all
hardware and software used within an organization was supplied by the same vendor and to hope that any other organizations with whom you wanted to communicate used the same equipment. Today, three standards are in common use: H.320, H.323, and MPEG-2 (also called ISO 13818-2). Each of these standards was developed by a different organization and is supported by different products. They are not compatible, although some application software packages understand more than one standard. H.320 is designed for room-to-room videoconferencing over high-speed telephone lines. H.323 is a family of standards designed for desktop videoconferencing and simple audio conferencing over the Internet. MPEG-2 is designed for faster connections, such as a LAN or a specially designed, privately operated WAN. Webcasting is a special type of one-directional videoconferencing in which content is sent from the server to the user. The developer creates content that is downloaded as needed by the users and played by a plug-in to a Web browser. At present, there are no standards for Webcast technologies, but the products by RealNetworks.com are the de facto standards.

2.6 IMPLICATIONS FOR MANAGEMENT

The first implication for management from this chapter is that the primary purpose of a network is to provide a worry-free environment in which applications can run. The network itself does not change the way an organization operates; it is the applications that the network enables that have the potential to change organizations. If the network does not easily enable a wide variety of applications, this can severely limit the ability of the organization to compete in its environment. The second implication is that over the past few years there has been a dramatic increase in the number and type of applications that run across networks. In the early 1990s, networks primarily delivered email and organization-specific application traffic (e.g.,
accounting transactions, database inquiries, inventory data). Today's traffic contains large amounts of email, Web packets, videoconferencing, telephone calls, instant messaging, music, and organization-specific application traffic. Traffic has been growing much more rapidly than expected, and each type of traffic has different implications for the best network design, making the job of the network manager much more complicated. Most organizations have seen their network operating costs grow significantly even though the cost per packet (i.e., the cost divided by the amount of traffic) has dropped significantly over the last 10 years. Experts predict that by 2015, video will be the most common type of traffic on the Web, passing email and Web, which are the leading traffic types today.

MANAGEMENT FOCUS 2-4: Cloud-Hosted Virtual Desktops

While cloud computing started on the server side, it is quickly moving to the client side—the desktop. Imagine that you work for a multinational organization and fly several times a year to different parts of the world to do your job. Your organization doesn't want you to travel with a laptop because it fears that you could lose the laptop with the data on it, but it wants you to be able to log in to any desktop in any office around the world and have your desktop appear on the screen. Well, with cloud technology, this is possible, and many companies are taking advantage of this new service. Could you guess its name? Yes, Desktop-as-a-Service (DaaS). Several companies offer DaaS without the infrastructure cost and with reduced complexity of deploying desktops. The service works as a monthly subscription and includes data center hardware and facilities as well as security. Dell DaaS on Demand and Amazon WorkSpaces are among the providers of DaaS.
SUMMARY

Application Architectures There are four fundamental application architectures. In host-based networks, the server performs virtually all of the work. In client-based networks, the client computer does most of the work; the server is used only for data storage. In client-server networks, the work is shared between the servers and clients. The client performs all presentation logic, the server handles all data storage and data access logic, and one or both perform the application logic. With peer-to-peer networks, client computers also play the role of a server. Client-server networks can be cheaper to install and often better balance the network loads but are more complex to develop and manage. Cloud computing is a form of client-server architecture.

World Wide Web One of the fastest growing Internet applications is the Web, which was first developed in 1990. The Web enables the display of rich graphical images, pictures, full-motion video, and sound. The Web is the most common way for businesses to establish a presence on the Internet. The Web has two application software packages: a Web browser on the client and a Web server on the server. Web browsers and servers communicate with one another using a standard called HTTP. Most Web pages are written in HTML, but many also use other formats. The Web contains information on just about every topic under the sun, but finding it and making sure the information is reliable are major problems.

Electronic Mail With email, users create and send messages using an application-layer software package on client computers called user agents. The user agent sends the mail to a server running an application-layer software package called a mail transfer agent, which then forwards the message through a series of mail transfer agents to the mail transfer agent on the receiver's server. Email is faster and cheaper than regular mail and can substitute for telephone conversations in some cases.
Several standards have been developed to ensure compatibility between different user agents and mail transfer agents, such as SMTP, POP, and IMAP.

KEY TERMS

application architecture, application logic, client-based architecture, client-server architecture, cloud computing, cloud provider, cloud-based architectures, data access logic, data storage, desktop videoconferencing, distributed computing model, email, H.320, H.323, host-based architecture, HTTP request, HTTP response, Hypertext Markup Language (HTML), Hypertext Transfer Protocol (HTTP), Infrastructure as a Service (IaaS), instant messaging (IM), Internet, Internet Message Access Protocol (IMAP), mail transfer agent, mail user agent, middleware, MPEG-2, Multipurpose Internet Mail Extension (MIME), multitenancy, n-tier architecture, peer-to-peer architecture, Platform as a Service (PaaS), Post Office Protocol (POP), presentation logic, protocol, request body, request header, request line, response body, response header, response status, scalability, server farm, Simple Mail Transfer Protocol (SMTP), SMTP body, SMTP header, Software as a Service (SaaS), storage area network (SAN), Telnet, thick client, thin client, three-tier architecture, two-tier architecture, uniform resource locator (URL), videoconferencing, Web browser, Webcasting, Web server

QUESTIONS

1. What are the different types of application architectures?
2. Describe the four basic functions of an application software package.
3. What are the advantages and disadvantages of host-based networks versus client-server networks?
4. What is middleware, and what does it do?
5. Suppose your organization was contemplating switching from a host-based architecture to client-server. What problems would you foresee?
6. Which is less expensive: host-based networks or client-server networks? Explain.
7. Compare and contrast two-tier, three-tier, and n-tier client-server architectures. What are the technical differences, and what advantages and disadvantages does each offer?
8. How does a thin client differ from a thick client?
9. What are the benefits of cloud computing?
10. Compare and contrast the three cloud computing models.
11. What is a network computer?
12. For what is HTTP used? What are its major parts?
13. For what is HTML used?
14. Describe how a Web browser and Web server work together to send a Web page to a user.
15. Can a mail sender use a two-tier architecture to send mail to a receiver using a three-tier architecture? Explain.
16. Describe how mail user agents and mail transfer agents work together to transfer mail messages.
17. What roles do SMTP, POP, and IMAP play in sending and receiving email on the Internet?
18. What are the major parts of an email message?
19. What is a virtual server?
20. What is Telnet, and why is it useful?
21. What is cloud computing?
22. Explain how instant messaging works.
23. Compare and contrast the application architecture for videoconferencing and the architecture for email.
24. Which of the common application architectures for email (two-tier client-server, Web-based) is "best"? Explain.
25. Some experts argue that thin-client client-server architectures are really host-based architectures in disguise and suffer from the same old problems. Do you agree? Explain.

EXERCISES

A. Investigate the use of the major architectures by a local organization (e.g., your university). Which architecture(s) does it use most often and what does it see itself doing in the future? Why?
B. What are the costs of thin client versus thick client architectures?
Search the Web for at least two different studies and be sure to report your sources. What are the likely reasons for the differences between the two?
C. Investigate which companies are the most reliable cloud computing providers for small business.
D. What application architecture does your university use for email? Explain.
E. Investigate the options for having your own private cloud as an individual. Hint: Try the Apple Web site.

MINICASES

I. Deals-R-Us Brokers (Part 1) Fred Jones, a distant relative of yours and president of Deals-R-Us Brokers (DRUB), has come to you for advice. DRUB is a small brokerage house that enables its clients to buy and sell stocks over the Internet, as well as place traditional orders by phone or fax. DRUB has just decided to offer a set of stock analysis tools that will help its clients more easily pick winning stocks, or so Fred tells you. Fred's information systems department has presented him with two alternatives for developing the new tools. The first alternative will have a special tool developed in C++ that clients will download onto their computers to run. The tool will communicate with the DRUB server to select data to analyze. The second alternative will have the C++ program running on the server; the client will use his or her browser to interact with the server.
a. Classify the two alternatives in terms of what type of application architecture they use.
b. Outline the pros and cons of the two alternatives and make a recommendation to Fred about which is better.

II. Deals-R-Us Brokers (Part 2) Fred Jones, a distant relative of yours and president of Deals-R-Us Brokers (DRUB), has come to you for advice. DRUB is a small brokerage house that enables its clients to buy and sell stocks over the Internet, as well as place traditional orders by phone or fax. DRUB has just decided to install a new email package.
The IT department offered Fred two solutions. First, it could host the email in-house using Microsoft Exchange Server. The second solution would be to use one of the cloud-based providers and completely outsource the company email. The IT department also explained to Fred that both solutions would allow users to access email on their desktops and laptops and also on their smart devices.
a. Briefly explain to Fred, in layperson's terms, the differences between the two.
b. Outline the pros and cons of the two alternatives and make a recommendation to Fred about which is better.

III. Accurate Accounting Diego Lopez is the managing partner of Accurate Accounting, a small accounting firm that operates a dozen offices in California. Accurate Accounting provides audit and consulting services to a growing number of small- and medium-sized firms, many of which are high-technology firms. Accurate Accounting staff typically spend many days on-site with clients during their consulting and audit projects, but have increasingly been using email and instant messaging (IM) to work with clients. Now, many firms are pushing Accurate Accounting to adopt videoconferencing. Diego is concerned about what videoconferencing software and hardware to install. While Accurate Accounting's email system enables it to exchange email with any client, using IM has proved difficult because Accurate Accounting has had to use one IM software package with some companies and a different IM software package with others. Diego is concerned that videoconferencing may prove to be as difficult to manage as IM. "Why can't IM work as simply as email?" he asks. "Will my new videoconferencing software and hardware work as simply as email, or will it be IM all over again?" Prepare a response to his questions.

IV. Ling Galleries Howard Ling is a famous artist with two galleries in Hawaii. Many of his paintings and prints are sold to tourists who visit Hawaii from Hong Kong and Japan.
He paints 6–10 new paintings a year, which sell for $50,000 each. The real money comes from the sales of prints; a popular painting will sell 1,000 prints at a retail price of $1,500 each. Some prints sell very quickly, while others do not. As an artist, Howard paints what he wants to paint. As a businessman, Howard also wants to create art that sells well. Howard visits each gallery once a month to talk with clients, but enjoys talking with the gallery staff on a weekly basis to learn what visitors say about his work and to get ideas for future work. Howard has decided to open two new galleries, one in Hong Kong and one in Tokyo. How can the Internet help Howard with the two new galleries?

CASE STUDY

NEXT-DAY AIR SERVICE See the book companion site at www.wiley.com/college/fitzgerald.

HANDS-ON ACTIVITY 2A

Looking Inside Your HTTP Packets Figures 2-11 and 2-12 show you inside one HTTP request and one HTTP response that we captured. The objective of this Activity is for you to see inside HTTP packets that you create.

1. Use your browser to connect to www.rexswain.com/httpview.html. You will see the screen in Figure 2-19.
2. In the box labeled URL, type any URL you like and click Submit. You will then see something like the screen in Figure 2-20. In the middle of the screen, under the label "Sending Request:", you will see the exact HTTP packet that your browser generated.
3. If you scroll this screen down, you'll see the exact HTTP response packet that the server sent back to you. In Figure 2-21, you'll see the response from the Indiana University Web server. You'll notice that at the time we did this, Indiana University was using the Apache Web server.
4. Try this on several sites around the Web to see what Web server they use. For example, Microsoft uses the Microsoft IIS Web server, while Cisco uses Apache. Some companies set their Web servers not to release this information.

FIGURE 2-19 The HTTP Viewer

Deliverables Do a print screen from two separate Web sites that shows your HTTP requests and the servers' HTTP responses.

FIGURE 2-20 Looking inside an HTTP request

HANDS-ON ACTIVITY 2B

Tracing Your Email Most email today is spam, unwanted commercial email, or phishing, fake email designed to separate you from your money. Criminals routinely send fake emails that try to get you to tell them your log-in information for your bank or your PayPal account, so they can steal the information, log in as you, and steal your money. It is very easy to fake a return address on an email, so simply checking that an email has a valid sender is not sufficient to ensure that the email was actually sent by the person or company that claims to have sent it. However, every SMTP email packet contains information in its header about who actually sent the email.

FIGURE 2-21 Looking inside an HTTP response

You can read this information yourself, or you can use a tool designed to simplify the process for you. The objective of this Activity is for you to trace an email you have received to see if the sending address on the email is actually the organization that sent it. There are many tools you can use to trace your email. We like a tool called eMail Tracker Pro, which has a free version that lasts 15 days.

1. Go to www.emailtrackerpro.com and download and install eMail Tracker Pro.
2. Log in to your email and find an email message you want to trace. I recently received an email supposedly from Wachovia Bank; the sender's email address was [email protected]
3.
After you open the email, find the option that enables you to view the Internet header or source of the message (in Microsoft Outlook, click the Options tab and look at the bottom of the box that pops up). Figure 2-22 shows the email I received and how to find the SMTP header (which Outlook calls the Internet header). Copy the entire SMTP header to the clipboard.
4. Start eMail Tracker Pro. Select Trace an email, and paste the SMTP header into the box provided. Click Trace to start the trace.
5. It may take up to 30 seconds to trace the email, so be patient. Figure 2-23 shows the results from the email I received. The email supposedly from Wachovia Bank was actually from a company named Musser and Kouri Law, whose primary contact is Musser Ratliff, CPA, which uses SBC in Plano, Texas, as its Internet service provider. We suspect that someone broke into this company's network and used its email server without permission, or fraudulently used this company's name and contact information on its domain registration.

FIGURE 2-22 Viewing the SMTP packet header

FIGURE 2-23 Viewing the source of the SMTP packet (Source: http://www.visualware.com/contact.html)

Deliverables Trace one email. Print the original email message and the trace results.

HANDS-ON ACTIVITY 2C

Seeing SMTP and POP PDUs We've discussed how messages are transferred using layers and the different protocol data units (PDUs) used at each layer. The objective of this Activity is for you to see the different PDUs in the messages that you send.

FIGURE 2-24 SMTP packets in Wireshark
To do this, we’ll use Wireshark, which is one of the world’s foremost network protocol analyzers, and is theTrimsize Trim Size: 8in x 10inFitzergald c02.tex V2 - July 25, 2014 10:05 A.M. Page 5858 Chapter 2 Application Layer de facto standard that most professional and education institutions use today. It is used for network troubleshooting, network analysis, software and communications protocol development, and general education about how networks work. Wireshark enables you to see all messages sent by your computer and may also let you see the messages sent by other users on your LAN (depending on how your LAN is configured). For this activity you can capture your own SMTP and POP packets using Wireshark, or use two files that we’ve created by capturing SMTP and POP packets. We’ll assume you’re going to use our files. If you’d like to capture your own packets, read Hands-On Activity 1B in Chapter 1 and use your two-tier email client to create and send an email message instead of your Web browser. If you’d like to use our files, go to the Web site for this book and download the two files: SMTP Capture.pkt and POP3 Capture.pkt. Part 1: SMTP 1. Start Wireshark and either capture your SMTP packets or open the file called SMTP Capture.pkt. 2. We used the email software on our client computer to send an email message to our email server. Figure 2-24 shows the packets we captured that were sent to and from the client computer (called 192.168.1.100) and the server (128.196.40.4) to send this message from the client to the server. The first few packets are called the handshake, as the client connects to the server and the server acknowledges it is ready to receive a new email message. 3. Packet 8 is the start of the email message that identifies the sender. 
The next packet from the client (packet 10) provides the recipient address, and then the email message starts with the DATA command (packet 12) and is spread over several packets (14, 15, and 17) because it is too large to fit in one Ethernet frame. (Remember that the sender's transport layer breaks up large messages into several smaller TCP segments for transmission, and the receiver's transport layer reassembles the segments back into the one SMTP message.)

4. Packet 14 contains the first part of the message that the user wrote. It's not that easy to read, but by looking in the bottom window, you can see what the sender wrote.

Deliverables

1. List the information in the SMTP header (to, from, date, subject, message ID#).
2. Look through the packets to read the user's message. List the user's actual name (not her email address), her birth date, and her SSN.
3. Some experts believe that sending an email message is like sending a postcard. Why? How secure is SMTP email? How could security be improved?

Part 2: POP

1. Start Wireshark and either capture your POP packets or open the file called POP3 Capture.pkt. (Note: Depending on the version of Wireshark you are using, the file extension may be .pkt or .pcap.)

2. We used the email software on our client computer to read an email message that was on our email server. Figure 2-25 shows the packets we captured that were sent between the client computer (128.196.239.91) and the server (128.192.40.4) to move an email message from the server to the client. The first few packets are called the handshake, as the client logs in to the server and the server accepts the login.

3. Packet 12 is the POP STAT command (status) that asks the server to report the number of email messages in the user's mailbox. The server responds in packet 13 and tells the client there is one message.

4. Packet 16 is the POP LIST command that asks the server to send the client a summary of email messages, which it does in packet 17.

5.
Packet 18 is the POP RETR command (retrieve) that asks the server to send message 1 to the client. Packets 20, 22, and 23 contain the email message. It's not that easy to read, but by looking in the bottom window for packet 20, you can see what the sender wrote. You can also expand the POP packet in the middle packet-detail window (by clicking the + box in front of it), which is easier to read.

Deliverables

1. Packets 5 through 11 are the login process. Can you read the user ID and password? Why or why not?
2. Look through the packets to read the user's message. List the user's actual name (not her email address), her birth date, and her SSN.

FIGURE 2-25 POP packets in Wireshark

CHAPTER 3 PHYSICAL LAYER

The physical layer (also called layer 1) is the physical connection between the computers and/or devices in the network. This chapter examines how the physical layer operates. It describes the most commonly used media for network circuits and explains the basic technical concepts of how data are actually transmitted through the media. Three different types of transmission are described: digital transmission of digital computer data, analog transmission of digital computer data, and digital transmission of analog voice data. You do not need an engineering-level understanding of the topics to be an effective user and manager of data communication applications.
It is important, however, that you understand the basic concepts, so this chapter is somewhat technical.

OBJECTIVES

◾ Be familiar with the different types of network circuits and media
◾ Understand digital transmission of digital data
◾ Understand analog transmission of digital data
◾ Understand digital transmission of analog data
◾ Be familiar with analog and digital modems
◾ Be familiar with multiplexing

OUTLINE

3.1 Introduction
3.2 Circuits
3.2.1 Circuit Configuration
3.2.2 Data Flow
3.2.3 Multiplexing
3.3 Communication Media
3.3.1 Twisted Pair Cable
3.3.2 Coaxial Cable
3.3.3 Fiber-Optic Cable
3.3.4 Radio
3.3.5 Microwave
3.3.6 Satellite
3.3.7 Media Selection
3.4 Digital Transmission of Digital Data
3.4.1 Coding
3.4.2 Transmission Modes
3.4.3 Digital Transmission
3.4.4 How Ethernet Transmits Data
3.5 Analog Transmission of Digital Data
3.5.1 Modulation
3.5.2 Capacity of a Circuit
3.5.3 How Modems Transmit Data
3.6 Digital Transmission of Analog Data
3.6.1 Translating from Analog to Digital
3.6.2 How Telephones Transmit Voice Data
3.6.3 How Instant Messenger Transmits Voice Data
3.6.4 Voice over Internet Protocol (VoIP)
3.7 Implications for Management
Summary

3.1 INTRODUCTION

This chapter examines how the physical layer operates. The physical layer is the network hardware, including servers, clients, and circuits, but in this chapter we focus on the circuits and on how clients and servers transmit data through them. The circuits are usually a combination of physical media (e.g., cables, wireless transmissions) and special-purpose devices that enable the transmissions to travel through the media. Special-purpose devices such as switches and routers are discussed in Chapters 6 and 8.

The word circuit has two very different meanings in networking, and sometimes it is hard to understand which meaning is intended.
Sometimes, we use the word circuit to refer to the physical circuit—the actual wire—used to connect two devices. In this case, we are referring to the physical media that carry the message we transmit, such as the twisted pair wire used to connect a computer to the LAN in an office. In other cases, we are referring to a logical circuit used to connect two devices, which refers to the transmission characteristics of the connection, such as when we say a company has a T1 connection into the Internet. In this case, T1 refers not to the physical media (i.e., what type of wire is used) but rather to how fast data can be sent through the connection.1 Often, each physical circuit is also a logical circuit, but sometimes it is possible to have one physical circuit—one wire—carry several separate logical circuits, or to have one logical circuit travel over several physical circuits. There are two fundamentally different types of data that can flow through the circuit: digital and analog. Computers produce digital data that are binary, either on or off, 0 or 1. In contrast, telephones produce analog data whose electrical signals are shaped like the sound waves they transfer; they can take on any value in a wide range of possibilities, not just 0 or 1. Data can be transmitted through a circuit in the same form they are produced. Most computers, for example, transmit their digital data through digital circuits to printers and other attached devices. Likewise, analog voice data can be transmitted through telephone networks in analog form. In general, networks designed primarily to transmit digital computer data tend to use digital transmission, and networks designed primarily to transmit analog voice data tend to use analog transmission (at least for some parts of the transmission). Data can be converted from one form into the other for transmission over network circuits. For example, digital computer data can be transmitted over an analog telephone circuit by using a modem. 
A modem at the sender's computer translates the computer's digital data into analog data that can be transmitted through the voice communication circuits, and a second modem at the receiver's end translates the analog transmission back into digital data for use by the receiver's computer. Likewise, it is possible to translate analog voice data into digital form for transmission over digital computer circuits using a device called a codec. Once again, there are two codecs, one at the sender's end and one at the receiver's end.

Why bother to translate voice into digital? The answer is that digital transmission is "better" than analog transmission. Specifically, digital transmission offers five key benefits over analog transmission:

◾ Digital transmission produces fewer errors than analog transmission. Because the transmitted data are binary (only two distinct values), it is easier to detect and correct errors.
◾ Digital transmission permits higher maximum transmission rates. Fiber-optic cable, for example, is designed for digital transmission.
◾ Digital transmission is more efficient. It is possible to send more data through a given circuit using digital rather than analog transmission.
◾ Digital transmission is more secure because it is easier to encrypt.
◾ Finally, and most importantly, integrating voice, video, and data on the same circuit is far simpler with digital transmission.

For these reasons, most long-distance telephone circuits built by the telephone companies and other common carriers over the past decades use digital transmission. In the future, most transmissions (voice, data, and video) will be sent digitally.

¹ Don't worry about what a T1 circuit is at this point. All you need to understand is that a T1 circuit is a specific type of circuit with certain characteristics, the same way we might describe gasoline as being unleaded or premium. We discuss T1 circuits in Chapter 9.
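The first benefit above, that binary data are easy to error-check, can be made concrete with the simplest possible scheme: a single even-parity bit. This is a minimal sketch for illustration, not any particular protocol's error-control method.

```python
# Sketch: even parity. The sender appends one bit so that the total count of
# 1s in the frame is even; the receiver recomputes the count and flags any
# frame whose 1s count is odd. This catches every single-bit error.

def add_parity(bits):
    """Append an even-parity bit to a list of 0/1 values."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """True if the received frame (payload + parity bit) has an even 1s count."""
    return sum(bits) % 2 == 0

frame = add_parity([1, 0, 1, 1, 0, 1, 0])
print(check_parity(frame))        # a clean frame passes the check

corrupted = frame.copy()
corrupted[2] ^= 1                 # flip one bit "in transit"
print(check_parity(corrupted))    # the single-bit error is detected
```

Nothing like this is practical with analog signals, where a value can drift by a small amount without there being any "wrong" value to detect; with only two legal symbol values, any drift large enough to matter flips a bit that the parity check can catch.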
Regardless of whether digital or analog transmission is used, transmission requires the sender and receiver to agree on two key parameters. First, they have to agree on the symbols that will be used: what pattern of electricity, light, or radio wave will represent a 0 and a 1? Once these symbols are set, the sender and receiver have to agree on the symbol rate: how many symbols will be sent over the circuit per second? Analog and digital transmissions are different, but both require a commonly agreed-on set of symbols and a symbol rate.

In this chapter, we first describe the basic types of circuits and examine the different media used to build circuits. Then we explain how data are actually sent through these media using digital and analog transmission.

3.2 CIRCUITS

3.2.1 Circuit Configuration

Circuit configuration is the basic physical layout of the circuit. There are two fundamental circuit configurations: point-to-point and multipoint. In practice, most complex computer networks have many circuits, some of which are point-to-point and some of which are multipoint.

Figure 3-1 illustrates a point-to-point circuit, which is so named because it goes from one point to another (e.g., one computer to another computer). These circuits sometimes are called dedicated circuits because they are dedicated to the use of these two computers. This type of configuration is used when the computers generate enough data to fill the capacity of the communication circuit. When an organization builds a network using point-to-point circuits, each computer has its own circuit running from itself to the other computers. This can get very expensive, particularly if there is some distance between the computers. Despite the cost, point-to-point circuits are used regularly in modern wired networks to connect clients to switches, switches to switches and routers, and routers to routers. We will discuss these circuits in detail in Chapter 7.
FIGURE 3-1 Point-to-point circuit

FIGURE 3-2 Multipoint circuit

Figure 3-2 shows a multipoint circuit (also called a shared circuit). In this configuration, many computers are connected on the same circuit. This means that each must share the circuit with the others. The disadvantage is that only one computer can use the circuit at a time. When one computer is sending or receiving data, all others must wait. The advantage of multipoint circuits is that they reduce the amount of cable required and typically use the available communication circuit more efficiently. Imagine the number of circuits that would be required if the network in Figure 3-2 were designed with separate point-to-point circuits. For this reason, multipoint configurations are cheaper than point-to-point circuits. Thus, multipoint circuits typically are used when each computer does not need to continuously use the entire capacity of the circuit or when building point-to-point circuits is too expensive. Wireless circuits are almost always multipoint circuits because multiple computers use the same radio frequencies and must take turns transmitting.

3.2.2 Data Flow

Circuits can be designed to permit data to flow in one direction or in both directions. Actually, there are three ways to transmit: simplex, half-duplex, and full-duplex (Figure 3-3). Simplex transmission is one-way transmission, such as that with radios and TVs.

Half-duplex transmission is two-way transmission, but you can transmit in only one direction at a time. A half-duplex communication link is similar to a walkie-talkie link; only one computer can transmit at a time. Computers use control signals to negotiate which will send and which will receive data.
The amount of time half-duplex communication takes to switch between sending and receiving is called turnaround time (also called retrain time or reclocking time). The turnaround time for a specific circuit can be obtained from its technical specifications (often between 20 and 50 milliseconds). Europeans sometimes use the term simplex circuit to mean a half-duplex circuit.

With full-duplex transmission, you can transmit in both directions simultaneously, with no turnaround time.

FIGURE 3-3 Simplex, half-duplex, and full-duplex transmissions

How do you choose which data flow method to use? Obviously, one factor is the application. If data always need to flow only in one direction (e.g., from a remote sensor to a host computer), then simplex is probably the best choice. In most cases, however, data must flow in both directions. The initial temptation is to presume that a full-duplex channel is best; however, each circuit has only so much capacity to carry data. Creating a full-duplex circuit means that the circuit offers full capacity both ways simultaneously. In some cases, it makes more sense to build a set of simplex circuits in the same way a set of one-way streets can increase the speed of traffic. In other cases, a half-duplex circuit may work best. For example, terminals connected to mainframes often transmit data to the host, wait for a reply, transmit more data, and so on, in a turn-taking process; usually, traffic does not need to flow in both directions simultaneously.
Such a traffic pattern is ideally suited to half-duplex circuits.

3.2.3 Multiplexing

Multiplexing means to break one high-speed physical communication circuit into several lower-speed logical circuits so that many different devices can simultaneously use it but still "think" that they have their own separate circuits (the multiplexer is "transparent"). Without multiplexing, the Internet would have collapsed in the 1990s. Multiplexing often is done in multiples of 4 (e.g., 8, 16). Figure 3-4 shows a four-level multiplexed circuit. Note that two multiplexers are needed for each circuit: one to combine the four original circuits into the one multiplexed circuit and one to separate them back into the four separate circuits.

The primary benefit of multiplexing is to save money by reducing the amount of cable or the number of network circuits that must be installed. For example, if we did not use multiplexers in Figure 3-4, we would need to run four separate circuits from the clients to the server. If the clients were located close to the server, this would be inexpensive. However, if they were located several miles away, the extra costs could be substantial.

There are four types of multiplexing: frequency division multiplexing (FDM), time division multiplexing (TDM), statistical time division multiplexing (STDM), and wavelength division multiplexing (WDM).

Frequency Division Multiplexing FDM can be described as dividing the circuit "horizontally" so that many signals can travel over a single communication circuit simultaneously. The circuit is divided into a series of separate channels, each transmitting on a different frequency, much like a series of different radio or TV stations. All signals exist in the media at the same time, but because they are on different frequencies, they do not interfere with each other.
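The "horizontal" division that FDM performs can be sketched with a little arithmetic. The 48 kHz circuit width, 4 kHz channel width, and 2 kHz guard band below are illustrative assumptions, not figures from the chapter; guard bands are the unused frequency gaps left between channels so adjacent signals do not bleed into each other.

```python
# Sketch: how many FDM channels fit on one circuit if each channel needs its
# own frequency band plus a guard band separating it from its neighbor.

def fdm_channels(total_khz, channel_khz, guard_khz):
    """Number of channels that fit, each consuming channel + guard width."""
    return int(total_khz // (channel_khz + guard_khz))

# e.g., a 48 kHz circuit carrying 4 kHz voice-grade channels with 2 kHz guards
print(fdm_channels(48, 4, 2))
```

Note the cost of the guard bands: without them the same 48 kHz circuit would hold twelve 4 kHz channels instead of eight, which is one reason FDM uses capacity less efficiently than time-based multiplexing.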
FIGURE 3-4 Multiplexed circuit

Time Division Multiplexing TDM shares a communication circuit among two or more computers by having them take turns, dividing the circuit vertically, so to speak.

Statistical Time Division Multiplexing STDM is the exception to the rule that the capacity of the multiplexed circuit must equal the sum of the circuits it combines. STDM allows more terminals or computers to be connected to a circuit than does FDM or TDM. If you have four computers connected to a multiplexer and each can transmit at 64 Kbps, then you should have a circuit capable of transmitting 256 Kbps (4 × 64 Kbps). However, not all computers will be transmitting continuously at their maximum transmission speed. Users typically pause to read their screens or spend time typing at lower speeds. Therefore, you do not need to provide a speed of 256 Kbps on this multiplexed circuit. If you assume that only two computers will ever transmit at the same time, 128 Kbps will be enough. STDM is called statistical because selection of the transmission speed for the multiplexed circuit is based on a statistical analysis of the usage requirements of the circuits to be multiplexed.

Wavelength Division Multiplexing WDM is a version of FDM used in fiber-optic cables. When fiber-optic cables were first developed, the devices attached to them were designed to use only one color of light generated by a laser or LED. Light has different frequencies (i.e., colors), so rather than building devices to transmit using only one color, why not send multiple signals, each on a different frequency, through the same fiber-optic cable?
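Returning to the STDM sizing argument above, the arithmetic can be written out directly. The "two simultaneous talkers" figure is the statistical assumption from the text, and in practice it would come from measured usage data.

```python
# Sketch: STDM circuit sizing. FDM and TDM must provision for the sum of all
# line rates (peak demand); STDM provisions only for the statistically
# expected number of simultaneously active computers.

line_rate_kbps = 64      # each computer's transmission rate
terminals = 4            # computers sharing the multiplexer

peak_kbps = terminals * line_rate_kbps        # what FDM/TDM would require
expected_active = 2                           # statistical assumption from usage
stdm_kbps = expected_active * line_rate_kbps  # what STDM provisions instead

print(peak_kbps)   # FDM/TDM circuit speed
print(stdm_kbps)   # STDM circuit speed, half the peak in this example
```

The saving is real but comes with a risk: if three or four computers ever transmit at once, the STDM multiplexer must buffer (and briefly delay) the excess traffic.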
By simply attaching different devices that could transmit in the full spectrum of light rather than just one frequency, the capacity of the existing fiber-optic cables could be dramatically increased, with no change to the physical cables themselves.

One technology that you may have come across that uses multiplexing is DSL. DSL stands for digital subscriber line, and it allows for simultaneous transmission of voice (phone calls), data going to the Internet (called upstream data), and data coming to your house from the Internet (called downstream data). With DSL, a DSL modem is installed at the customer's home or office, and another DSL modem is installed at the telephone company switch closet. The modem is first an FDM device that splits the physical circuit into three logical circuits (phone, upstream data, and downstream data). TDM is then used within the two data channels to provide a set of one or more individual channels that can be used to carry different data. A combination of amplitude and phase modulation is used in the data circuits to provide the desired data rate. You will learn more about DSL in Chapter 10.

MANAGEMENT FOCUS 3-1 Structured Cabling EIA/TIA 568-B

In 1995, the Telecommunications Industry Association (TIA) and Electronic Industries Alliance (EIA) came up with the first standard for structured cabling, called TIA/EIA 568-A. This standard defined the minimum requirements for internal telecommunications wiring within buildings and between buildings on one campus. This standard has been updated and changed many times, and today the accepted standard is TIA/EIA 568-B, which came out in 2002. This standard has six subsystems:

1. Building entrance: the point where external cabling and wireless connects to the internal building wiring and equipment room
2. Equipment room (ER): the room where network servers and telephone equipment would be stored
3.
Telecommunications closet: the room that contains the cable termination points and the distribution frames
4. Backbone cabling: the cabling that interconnects telecommunications closets, equipment rooms, and building entrances within a building; this also refers to cabling between buildings
5. Horizontal cabling: the cabling that runs from the telecommunications closet to each LAN
6. Work area: the cabling where the computers, printers, patch cables, jacks, and so on, are located

This standard describes what the master cabling document should look like (which would describe each of the six areas discussed previously) and applies to both twisted pair and fiber-optic cabling.

MANAGEMENT FOCUS 3-2 Undersea Fiber-Optic Cables

Perhaps you were wondering what happens when you send an email from the United States to Europe. How is your email transmitted from one continent to another? It most likely travels through one of the submarine cables that connect America and Europe. A neat interactive submarine cable map can be found at http://www.submarinecablemap.com. This map shows you each cable's name, ready-for-service (RFS) date, length, owners, Web site (if any), and landing points. Each cable on this map has a capacity of at least 5 Gbps.

Actually, the first submarine telecommunication cable was laid in the 1850s and carried telegraphy traffic. Today, we use fiber-optic cable that carries phone, Internet, and private data as digital data. So now you may ask yourself, how do these cables get laid on the seabed? Submarine cables are laid using special cable-layer ships; these are factories that produce the cable on board and then have equipment to lay and bury the cable. The cable-layer ships get as close as possible to the shore where the cable will be connected. A messenger line is sent out from the ship using a work boat that takes it to the shore.
Once the cable is secured on shore, the installation process under the sea can begin. A 30-ton sea plow with the cable in it (think of a needle and thread) is then lowered overboard and lands on the seabed. The plow then buries the cable under the seabed at the required burial depth (up to 3 meters). The simultaneous lay-and-bury of the cable continues until an agreed position, after which the cable is surface-laid until reaching its destination.

3.3 COMMUNICATION MEDIA

The medium (or media, if there is more than one) is the physical matter or substance that carries the voice or data transmission. Many different types of transmission media are currently in use, such as copper (wire), glass or plastic (fiber-optic cable), or air (radio, microwave, or satellite). There are two basic types of media. Guided media are those in which the message flows through a physical medium such as a twisted pair wire, coaxial cable, or fiber-optic cable; the medium "guides" the signal. Wireless media are those in which the message is broadcast through the air, such as microwave or satellite.

In many cases, the circuits used in WANs are provided by the various common carriers who sell usage of them to the public. We call the circuits sold by the common carriers communication services. Chapter 9 describes specific services available in North America. The following sections describe the media and the basic characteristics of each circuit type, in the event you were establishing your own physical network, whereas Chapter 9 describes how the circuits are packaged and marketed for purchase or lease from a common carrier.
If your organization has leased a circuit from a common carrier, you are probably less interested in the media used and more interested in whether the speed, cost, and reliability of the circuit meet your needs.

3.3.1 Twisted Pair Cable

One of the most commonly used types of guided media is twisted pair cable, insulated pairs of wires that can be packed quite close together (Figure 3-5). The wires usually are twisted to minimize the electromagnetic interference between one pair and any other pair in the bundle. Your house or apartment probably has a set of two twisted pair wires (i.e., four wires) running from it to the telephone company network. One pair is used to connect your telephone; the other pair is a spare that can be used for a second telephone line.

FIGURE 3-5 Category 5e twisted pair wire
Source: Courtesy of Belkin International, Inc.

The twisted pair cable used in LANs is usually packaged as four sets of pairs, as shown in Figure 3-5, whereas bundles of several thousand wire pairs are placed under city streets and in large buildings. The specific types of twisted pair cable used in LANs, such as Cat 5e and Cat 6, are discussed in Chapter 7.

3.3.2 Coaxial Cable

Coaxial cable is a type of guided medium that is quickly disappearing (Figure 3-6). Coaxial cable has a copper core (the inner conductor) with an outer cylindrical shell for insulation. The outer shield, just under the shell, is the second conductor. Because they have additional shielding provided by their multiple layers of material, coaxial cables are less prone to interference and errors than basic low-cost twisted pair wires. Coaxial cables cost about three times as much as twisted pair wires but offer few additional benefits other than better shielding. One can also buy specially shielded twisted pair wire that provides the same level of quality as coaxial cable but at half its cost.
For this reason, few companies are installing coaxial cable today, although some still continue to use existing coaxial cable that was installed years ago.

FIGURE 3-6 Coaxial cables. Thinnet and Thicknet Ethernet cables (right) and cross-sectional view (left): 1. center core, 2. dielectric insulator, 3. metallic shield, 4. plastic jacket
Source: Courtesy of Tim Kloske

3.3.3 Fiber-Optic Cable

Although twisted pair is the most common type of guided medium, fiber-optic cable also is becoming widely used. Instead of carrying telecommunication signals in the traditional electrical form, this technology uses high-speed streams of light pulses from lasers or LEDs (light-emitting diodes) that carry information inside hair-thin strands of glass called optical fibers. Figure 3-7 shows a fiber-optic cable and depicts the optical core, the cladding (the glass layer surrounding the core), and how light rays travel in optical fibers.

FIGURE 3-7 Fiber-optic cable
Source: © Hugh Threlfall/Alamy

The earliest fiber-optic systems were multimode, meaning that the light could reflect inside the cable at many different angles. Multimode cables are plagued by excessive signal weakening (attenuation) and dispersion (spreading of the signal so that different parts of the signal arrive at different times at the destination). For these reasons, early multimode fiber was usually limited to about 500 meters. Graded-index multimode fiber attempts to reduce this problem by changing the refractive properties of the glass fiber so that as the light approaches the outer edge of the fiber, it speeds up, which compensates for the slightly longer distance it must travel compared with light in the center of the fiber.
Therefore, the light in the center is more likely to arrive at the same time as the light that has traveled at the edges of the fiber. This increases the effective distance to just under 1,000 meters.

Single-mode fiber-optic cables transmit a single direct beam of light through a cable that ensures the light reflects in only one pattern, in part because the core diameter has been reduced from 50 microns to about 5 to 10 microns. This smaller-diameter core allows the fiber to send a more concentrated light beam, resulting in faster data transmission speeds and longer distances, often up to 100 kilometers. However, because the light source must be perfectly aligned with the cable, single-mode products usually use lasers (rather than the LEDs used in multimode systems) and therefore are more expensive.

Fiber-optic technology is a revolutionary departure from the traditional copper wires of twisted pair cable or coaxial cable. One of the main advantages of fiber optics is that it can carry huge amounts of information at extremely fast data rates. This capacity makes it ideal for the simultaneous transmission of voice, data, and image signals. In most cases, fiber-optic cable works better under harsh environmental conditions than do its metallic counterparts. It is not as fragile or brittle, it is not as heavy or bulky, and it is more resistant to corrosion. Also, in case of fire, an optical fiber can withstand higher temperatures than can copper wire. Even when the outside jacket surrounding the optical fiber has melted, a fiber-optic system still can be used.

3.3.4 Radio

One of the most commonly used forms of wireless media is radio; when people use the term wireless, they usually mean radio transmission. When you connect your laptop to the network wirelessly, you are using radio transmission.
Radio data transmission uses the same basic principles as standard radio transmission. Each device or computer on the network has a radio receiver/transmitter that uses a specific frequency range that does not interfere with commercial radio stations. The transmitters are very low power, designed to transmit a signal only a short distance, and are often built into portable computers or handheld devices such as phones and personal digital assistants. Wireless technologies for LAN environments, such as IEEE 802.11, are discussed in more detail in Chapter 7.

MANAGEMENT FOCUS 3-3 Boingo Hot Spots Around the World

Perhaps you have come across Boingo while trying to find a wireless connection in an airport between flights. Boingo is a wireless Internet service provider (WISP) that differs from many free wifi connections you can get at airports or coffee shops because it offers a secure connection (specifically, a VPN or WPA service that can be configured on your device, but more about this in Chapter 11). This secure connection is now offered in 7,000 U.S. locations and 13,000 international locations, and as in-flight wifi on some international carriers. Their monthly rates start at $9.94 for laptops and $7.95 for other mobile devices. Boingo also offers 1-, 2-, and 3-hour plans in case you don't travel frequently and don't need a monthly subscription. To find Boingo hot spots, you need to download an app on your phone or laptop, and the app will alert you if there is an available wifi connection in your area. The app will even chart a graph that shows you signal strength in real time.

Adapted from: Boingo.com, cnet.com

3.3.5 Microwave

Microwave transmission is an extremely high-frequency radio communication beam that is transmitted over a direct line-of-sight path between any two points. As its name implies, a microwave signal has an extremely short wavelength, thus the word microwave. Microwave radio transmissions perform the same functions as cables.
For example, point A communicates with point B via a through-the-air microwave transmission path, instead of a copper wire cable. Because microwave signals approach the frequency of visible light waves, they exhibit many of the same characteristics as light waves, such as reflection, focusing, and refraction. As with visible light waves, microwave signals can be focused into narrow, powerful beams that can be projected over long distances. Just as a parabolic reflector focuses a searchlight into a beam, a parabolic reflector also focuses a high-frequency microwave into a narrow beam. Towers are used to elevate the radio antennas to account for the earth's curvature and maintain a clear line-of-sight path between the two parabolic reflectors; see Figure 3-8. This transmission medium is typically used for long-distance data or voice transmission. It does not require the laying of any cable, because long-distance antennas with microwave repeater stations can be placed approximately 25-50 miles apart. A typical long-distance antenna might be 10 feet wide, although over shorter distances in the inner cities, the dish antennas can be less than 2 feet in diameter. The airwaves in larger cities are becoming congested because so many microwave dish antennas have been installed that they interfere with one another.

[Figure 3-8: A microwave tower. The round antennas are microwave antennas and the straight antennas are cell phone antennas. Source: © Matej Pribelsky / iStockphoto]

3.3.6 Satellite

Satellite transmission is similar to microwave transmission, except that instead of transmission involving another nearby microwave dish antenna, it involves a satellite many miles up in space. Figure 3-9 depicts a geosynchronous satellite.
Geosynchronous means that the satellite remains stationary over one point on the earth, revolving at the same speed as the earth's rotation.

[Figure 3-9: Satellites in operation.]

One disadvantage of satellite transmission is the propagation delay that occurs because the signal has to travel out into space and back to earth, a distance of many miles that even at the speed of light can be noticeable. Low earth orbit (LEO) satellites are placed in lower orbits to minimize propagation delay. Satellite transmission is sometimes also affected by raindrop attenuation, when satellite transmissions are absorbed by heavy rain. It is not a major problem, but engineers need to work around it.

MANAGEMENT FOCUS 3-4: Satellite Communications Improve Performance

Boyle Transportation hauls hazardous materials nationwide for both commercial customers and the government, particularly the U.S. Department of Defense. The Department of Defense recently mandated that hazardous materials contractors use mobile communications systems with up-to-the-minute monitoring when hauling the department's hazardous cargoes. After looking at the alternatives, Boyle realized that it would have to build its own system. Boyle needed a relational database at its operations center that contained information about customers, pickups, deliveries, truck location, and truck operating status. Data are distributed from this database via satellite to an antenna on each truck. Now, at any time, Boyle can notify the designated truck to make a new pickup via the bidirectional satellite link and record the truck's acknowledgment. Each truck contains a mobile data terminal connected to the satellite network. Each driver uses a keyboard to enter information, which transmits the location of the truck. These satellite data are received by the main offices via a leased line from the satellite earth station.
This system increased productivity by an astounding 80% over 2 years; administration costs increased by only 20%.

3.3.7 Media Selection

Which media are best? It is hard to say, particularly when manufacturers continue to improve various media products. Several factors are important in selecting media.

◾ The type of network is one major consideration. Some media are used only for WANs (microwave and satellite), whereas others typically are not (twisted pair, coaxial cable, and radio), although we should note that some old WAN networks still use twisted pair cable. Fiber-optic cable is unique in that it can be used for virtually any type of network.

◾ Cost is always a factor in any business decision. Costs are always changing as new technologies are developed and as competition among vendors drives prices down. Among the guided media, twisted pair wire is generally the cheapest, coaxial cable is somewhat more expensive, and fiber-optic cable is the most expensive. The cost of the wireless media is generally driven more by distance than by any other factor. For very short distances (several hundred meters), radio is the cheapest; for moderate distances (several hundred miles), microwave is cheapest; and for long distances, satellite is cheapest.

◾ Transmission distance is a related factor. Twisted pair wire, coaxial cable, and radio can transmit data only a short distance before the signal must be regenerated. Twisted pair wire and radio typically can transmit up to 100-300 meters, and coaxial cable typically between 200 and 500 meters. Fiber optics can transmit up to 75 miles, and new types of fiber-optic cable can reach more than 600 miles.

◾ Security is primarily determined by whether the media are guided or wireless. Wireless media (radio, microwave, and satellite) are the least secure because their signals are easily intercepted.
Guided media (twisted pair, coaxial, and fiber optics) are more secure, with fiber optics being the most secure.

◾ Error rates are also important. Wireless media are most susceptible to interference and thus have the highest error rates. Among the guided media, fiber optics provides the lowest error rates, coaxial cable the next best, and twisted pair cable the worst, although twisted pair cable is generally better than the wireless media.

◾ Transmission speeds vary greatly among the different media. It is difficult to quote specific speeds for different media because transmission speeds are constantly improving and because they vary within the same type of media, depending on the specific type of cable and the vendor. In general, twisted pair cable and coaxial cable can provide data rates of between 1 Mbps (1 million bits per second) and 1 Gbps (1 billion bits per second), whereas fiber-optic cable ranges between 1 Gbps and 40 Gbps. Radio, microwave, and satellite generally provide 10-100 Mbps.

3.4 DIGITAL TRANSMISSION OF DIGITAL DATA

All computer systems produce binary data. For these data to be understood by both the sender and receiver, both must agree on a standard system for representing the letters, numbers, and symbols that compose messages. The coding scheme is the language that computers use to represent data.

3.4.1 Coding

A character is a symbol that has a common, constant meaning. A character might be the letter A or B, or it might be a number such as 1 or 2. Characters also may be special symbols such as ? or &. Characters in data communications, as in computer systems, are represented by groups of bits that are binary zeros (0) and ones (1). The groups of bits representing the set of characters that are the "alphabet" of any given system are called a coding scheme, or simply a code. A byte is a group of consecutive bits that is treated as a unit or character.
One byte normally is composed of 8 bits and usually represents one character; however, in data communications, some codes use 5, 6, 7, 8, or 9 bits to represent a character. For example, representation of the character A by a group of 8 bits (say, 01000001) is an example of coding. There are three predominant coding schemes in use today. United States of America Standard Code for Information Interchange (USASCII, or, more commonly, ASCII) is the most popular code for data communications and is the standard code on most microcomputers. There are two types of ASCII; one is a 7-bit code that has 128 valid character combinations, and the other is an 8-bit code that has 256 combinations. The number of combinations can be determined by taking the number 2 and raising it to the power equal to the number of bits in the code, because each bit has two possible values, a 0 or a 1. In this case, 2^7 = 128 characters or 2^8 = 256 characters. A second commonly used coding scheme is ISO 8859, which is standardized by the International Organization for Standardization (ISO). ISO 8859 is an 8-bit code that includes the ASCII codes plus non-English letters used by many European languages (e.g., letters with accents). If you look closely at Figure 2.21, you will see that HTML often uses ISO 8859. Unicode is the other commonly used coding scheme. There are many different versions of Unicode. UTF-8 is an 8-bit version that is very similar to ASCII. UTF-16, which uses 16 bits per character (i.e., 2 bytes, called a "word"), is used by Windows. By using more bits, UTF-16 can represent many more characters beyond the usual English or Latin characters, such as Cyrillic or Chinese.
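The character-to-bits mapping and the power-of-two arithmetic above can be checked with a short sketch (the helper name `to_ascii_bits` is ours, not part of any standard):

```python
def to_ascii_bits(ch: str, width: int = 8) -> str:
    """Return a character's ASCII code point as a zero-padded bit string."""
    return format(ord(ch), f"0{width}b")

print(to_ascii_bits("A"))  # 01000001, the pattern used as an example above
print(to_ascii_bits("a"))  # 01100001

# An n-bit code offers 2**n combinations:
print(2 ** 7)  # 128 (7-bit ASCII)
print(2 ** 8)  # 256 (8-bit ASCII)
```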
Figure 3-10: Binary numbers used to represent different characters using ASCII

Character  ASCII
A          01000001
B          01000010
C          01000011
D          01000100
E          01000101
a          01100001
b          01100010
c          01100011
d          01100100
e          01100101
1          00110001
2          00110010
3          00110011
4          00110100
!          00100001
$          00100100

We can choose any pattern of bits we like to represent any character we like, as long as all computers understand what each bit pattern represents. Figure 3-10 shows the 8-bit binary bit patterns used to represent a few of the characters we use in ASCII.

3.4.2 Transmission Modes

Parallel

Parallel transmission is the way the internal transfer of binary data takes place inside a computer. If the internal structure of the computer is 8 bit, then all 8 bits of the data element are transferred between main memory and the central processing unit simultaneously on 8 separate connections. The same is true of computers that use a 32-bit structure; all 32 bits are transferred simultaneously on 32 connections.

TECHNICAL FOCUS 3-1: Basic Electricity

There are two general categories of electrical current: direct current and alternating current. Current is the movement or flow of electrons, normally from positive (+) to negative (−). The plus (+) or minus (−) measurements are known as polarity. Direct current (DC) travels in only one direction, whereas alternating current (AC) travels first in one direction and then in the other direction. A copper wire transmitting electricity acts like a hose transferring water. We use three common terms when discussing electricity. Voltage is defined as electrical pressure: the amount of electrical force pushing electrons through a circuit. In principle, it is the same as pounds per square inch in a water pipe. Amperes (amps) are units of electrical flow, or volume. This measure is analogous to gallons per minute for water. The watt is the fundamental unit of electrical power. It is a rate unit, not a quantity.
You obtain the wattage by multiplying the volts by the amperes.

Figure 3-11 shows how all 8 bits of one character could travel down a parallel communication circuit. The circuit is physically made up of eight separate wires, wrapped in one outer coating. Each physical wire is used to send 1 bit of the 8-bit character. However, as far as the user is concerned (and the network, for that matter), there is only one circuit; each of the wires inside the cable bundle simply connects to a different part of the plug that connects the computer to the bundle of wire.

[Figure 3-11: Parallel transmission of an 8-bit code. One character consisting of 8 parallel bits travels from sender to receiver over a circuit of eight copper wires.]

[Figure 3-12: Serial transmission of an 8-bit code. One character consisting of 8 serial bits travels from sender to receiver over a circuit of one copper wire.]

Serial

Serial transmission means that a stream of data is sent over a communication circuit sequentially in a bit-by-bit fashion, as shown in Figure 3-12. In this case, there is only one physical wire inside the bundle, and all data must be transmitted over that one physical wire. The transmitting device sends one bit, then a second bit, and so on, until all the bits are transmitted. It takes n iterations or cycles to transmit n bits. Thus, serial transmission is considerably slower than parallel transmission: eight times slower in the case of 8-bit ASCII (because there are 8 bits). Compare Figure 3-12 with Figure 3-11.

3.4.3 Digital Transmission

Digital transmission is the transmission of binary electrical or light pulses; the signal has only two possible states, a 1 or a 0. The most commonly encountered voltage levels range from a low of +3/−3 to a high of +24/−24 volts. Digital signals are usually sent over wire of no more than a few thousand feet in length.
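The parallel/serial contrast in Figures 3-11 and 3-12 can be sketched as a toy model (the `*_send` helper names are illustrative only): parallel moves all 8 bits in one clock cycle over eight wires, while serial needs one cycle per bit.

```python
def parallel_send(bits):
    # All 8 bits leave at once on 8 separate wires: one clock cycle total.
    return [tuple(bits)]

def serial_send(bits):
    # One wire: each bit occupies its own clock cycle, so n bits take n cycles.
    return [(b,) for b in bits]

char = [0, 1, 0, 0, 0, 0, 0, 1]        # the 8 bits of ASCII "A"
print(len(parallel_send(char)))         # 1 cycle
print(len(serial_send(char)))           # 8 cycles: eight times slower
```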
All digital transmission techniques require a set of symbols (to define how to send a 1 and a 0) and the symbol rate (how many symbols will be sent per second). Figure 3-13 shows five types of digital transmission techniques. With unipolar signaling, the voltage is always positive or always negative (like a DC current). Figure 3-13 illustrates a unipolar technique in which a signal of 0 volts (no current) is used to transmit a zero and a signal of +5 volts is used to transmit a 1. An obvious question at this point is this: If 0 volts means a zero, how do you send no data? This is discussed in detail in Chapter 4. For the moment, we will just say that there are ways to indicate when a message starts and stops, and when there are no messages to send, the sender and receiver agree to ignore any electrical signal on the line. To successfully send and receive a message, both the sender and receiver have to agree on how often the sender can transmit data, that is, on the symbol rate. For example, if the symbol rate on a circuit is 64 kilohertz (kHz), or 64,000 symbols per second, then the sender changes the voltage on the circuit once every 1/64,000 of a second and the receiver must examine the circuit every 1/64,000 of a second to read the incoming data.

[Figure 3-13: Unipolar, bipolar (nonreturn to zero, return to zero, and alternate mark inversion), and Manchester encoding of the digital signal 0011010001.]

In bipolar signaling, the ones and zeros vary from a plus voltage to a minus voltage (like an AC current).
The first bipolar technique illustrated in Figure 3-13 is called nonreturn to zero (NRZ) because the voltage alternates from +5 volts (a symbol indicating a 1) to −5 volts (a symbol indicating a 0) without ever returning to 0 volts. The second bipolar technique in this figure is called return to zero (RZ) because it always returns to 0 volts after each bit before going to +5 volts (the symbol for a 1) or −5 volts (the symbol for a 0). The third bipolar technique is called alternate mark inversion (AMI) because a 0 is always sent using 0 volts, but 1s alternate between +5 volts and −5 volts. AMI is used on T1 and T3 circuits. In Europe, bipolar signaling sometimes is called double current signaling because you are moving between a positive and negative voltage potential. In general, bipolar signaling experiences fewer errors than unipolar signaling because the symbols are more distinct. Noise or interference on the transmission circuit is less likely to cause the bipolar's +5 volts to be misread as a −5 volts than it is to cause the unipolar's 0 volts to be misread as a +5 volts. This is because changing the polarity of a current (from positive to negative, or vice versa) is more difficult than changing its magnitude.

3.4.4 How Ethernet Transmits Data

The most common technology used in LANs is Ethernet; if you are working in a computer lab on campus, you are most likely using Ethernet. (If you don't know what Ethernet is, don't worry; we will discuss Ethernet in Chapter 6.) Ethernet uses digital transmission over either serial or parallel circuits, depending on which version of Ethernet you use. One version of Ethernet that uses serial transmission requires 1/10,000,000 of a second to send one symbol; that is, it transmits 10 million symbols (each of 1 bit) per second.
This gives a data rate of 10 Mbps, and if we assume that there are 8 bits in each character, this means that about 1.25 million characters can be transmitted per second over the circuit. Ethernet uses Manchester encoding, which is a special type of bipolar signaling in which the signal is changed from high to low or from low to high in the middle of the signal. A change from high to low is used to represent a 0, whereas the opposite (a change from low to high) is used to represent a 1. See Figure 3-13. Manchester encoding is less susceptible to having errors go undetected, because if there is no transition in midsignal, the receiver knows that an error must have occurred.

3.5 ANALOG TRANSMISSION OF DIGITAL DATA

Telephone networks were originally built for human speech rather than for data. They were designed to transmit the electrical representation of sound waves, rather than the binary data used by computers. There are many occasions when data need to be transmitted over a voice communications network. Many people working at home still use a modem over their telephone line to connect to the Internet. The telephone system (commonly called POTS for plain old telephone service) enables voice communication between any two telephones within its network. The telephone converts the sound waves produced by the human voice at the sending end into electrical signals for the telephone network. These electrical signals travel through the network until they reach the other telephone and are converted back into sound waves. Analog transmission occurs when the signal sent over the transmission media continuously varies from one state to another in a wave-like pattern much like the human voice. Modems translate the digital binary data produced by computers into the analog signals required by voice transmission circuits. One modem is used by the transmitter to produce the analog signals and a second by the receiver to translate the analog signals back into digital signals.
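The line codes of Figure 3-13 discussed in Sections 3.4.3 and 3.4.4 can be sketched in a few lines of code (the function names are ours; the voltage levels follow the figure's conventions):

```python
def nrz(bits):
    # Bipolar nonreturn to zero: +5 V is the symbol for a 1, -5 V for a 0;
    # the line never rests at 0 V.
    return [+5 if b else -5 for b in bits]

def ami(bits):
    # Alternate mark inversion: a 0 is sent as 0 V; successive 1s alternate
    # between +5 V and -5 V.
    out, last = [], -5
    for b in bits:
        if b:
            last = -last
            out.append(last)
        else:
            out.append(0)
    return out

def manchester(bits):
    # Two half-bit voltages per bit: low-to-high for a 1, high-to-low for a 0
    # (the +/-2 V levels follow Figure 3-13).
    return [(-2, +2) if b else (+2, -2) for b in bits]

def manchester_decode(pairs):
    # A missing mid-bit transition (two equal halves) would signal an error.
    return [1 if first < second else 0 for first, second in pairs]

stream = [0, 0, 1, 1, 0, 1]
print(nrz(stream))        # [-5, -5, 5, 5, -5, 5]
print(ami(stream))        # [0, 0, 5, -5, 0, 5]
print(manchester_decode(manchester(stream)) == stream)  # True
```

The AMI helper shows why successive 1s can never look identical on the wire, and the Manchester decoder shows why every valid bit carries a transition the receiver can check.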
The sound waves transmitted through the voice circuit have three important characteristics (see Figure 3-14). The first is the height of the wave, called amplitude. Amplitude is measured in decibels (dB). Our ears detect amplitude as the loudness or volume of sound. Every sound wave has two parts, half above the zero amplitude point (i.e., positive) and half below (i.e., negative), and both halves are always the same height. The second characteristic is the length of the wave, usually expressed as the number of waves per second, or frequency. Frequency is expressed in hertz (Hz); one hertz is the same as 1 cycle per second, so 20,000 hertz is equal to 20,000 cycles per second. One kilohertz (kHz) is 1,000 cycles per second (kilocycles), 1 megahertz (MHz) is 1 million cycles per second (megacycles), and 1 gigahertz (GHz) is 1 billion cycles per second. Our ears detect frequency as the pitch of the sound. Frequency is the inverse of the length of the sound wave, so that a high frequency means that there are many short waves in a 1-second interval, whereas a low frequency means that there are fewer (but longer) waves in 1 second.

[Figure 3-14: A sound wave, showing its amplitude, wavelength, and phase.]

The third characteristic is the phase, which refers to the direction in which the wave begins. Phase is measured in degrees (°). The wave in Figure 3-14 starts up and to the right, which is defined as a 0° phase wave. Waves can also start down and to the right (a 180° phase wave), and in virtually any other part of the sound wave.

3.5.1 Modulation

When we transmit data through the telephone lines, we use the shape of the sound waves we transmit (in terms of amplitude, frequency, and phase) to represent different data values.
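The three characteristics just described fully determine the shape of a pure tone; a minimal sketch (the `wave` helper and its default values are illustrative):

```python
import math

def wave(t, amplitude=1.0, frequency_hz=1_000, phase_deg=0.0):
    # A pure tone at time t is described entirely by its height (amplitude),
    # waves per second (frequency), and starting direction (phase).
    return amplitude * math.sin(2 * math.pi * frequency_hz * t
                                + math.radians(phase_deg))

# A 180-degree phase shift inverts the wave at every instant:
t = 0.0001
print(round(wave(t), 6))                    # 0.587785
print(round(wave(t, phase_deg=180.0), 6))   # -0.587785: opposite sign
```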
We do this by transmitting a simple sound wave through the circuit (called the carrier wave) and then changing its shape in different ways to represent a 1 or a 0. Modulation is the technical term used to refer to these “shape changes.” There are three fundamental modulation techniques: amplitude modulation, frequency modulation, and phase modulation. Once again, the sender and receiver have to agree on what symbols will be used (what amplitude, frequency, and phase will represent a 1 and a 0) and on the symbol rate (how many symbols will be sent per second). Basic Modulation With amplitude modulation (AM) (also called amplitude shift keying [ASK]), the amplitude or height of the wave is changed. One amplitude is the symbol defined to be 0, and another amplitude is the symbol defined to be a 1. In the AM shown in Figure 3-15, the highest amplitude symbol (tallest wave) represents a binary 1 and the lowest amplitude symbol represents a binary 0. In this case, when the sending device wants to transmit a 1, it would send a high-amplitude wave (i.e., a loud signal). AM is more susceptible to noise (more errors) during transmission than is frequency modulation or phase modulation. Frequency modulation (FM) (also called frequency shift keying [FSK]) is a modulation technique whereby each 0 or 1 is represented by a number of waves per second (i.e., a different frequency). In this case, the amplitude does not vary. One frequency (i.e., a certain number of waves per second) is the symbol defined to be a 1, and a different frequency (a different number of waves per second) is the symbol defined to be a 0. In Figure 3-16, the higher frequency wave symbol (more waves per time period) equals a binary 1, and the lower frequency wave symbol equals a binary 0. Phase modulation (PM) (also called phase shift keying [PSK]) is the most difficult to understand. Phase refers to the direction in which the wave begins. 
Until now, the waves we have shown start by moving up and to the right (this is called a 0° phase wave). Waves can also start down and to the right; this is called a phase of 180°. With phase modulation, one phase symbol is defined to be a 0 and the other phase symbol is defined to be a 1. Figure 3-17 shows the case where a phase of 0° symbol is defined to be a binary 0 and a phase of 180° symbol is defined to be a binary 1.

[Figure 3-15: Amplitude modulation of the bit stream 0011010001.]

[Figure 3-16: Frequency modulation of the bit stream 0011010001, using 1,200 and 2,400 hertz.]

[Figure 3-17: Phase modulation of the bit stream 0011010001.]

Sending Multiple Bits Simultaneously

Each of the three basic modulation techniques (AM, FM, and PM) can be refined to send more than 1 bit at one time. For example, basic AM sends 1 bit per wave (or symbol) by defining two different amplitudes, one for a 1 and one for a 0. It is possible to send 2 bits on one wave or symbol by defining four different amplitudes. Figure 3-18 shows the case where the highest-amplitude wave is defined to be a symbol representing 2 bits, both 1s. The next highest amplitude is the symbol defined to mean first a 1 and then a 0, and so on. This technique could be further refined to send 3 bits at the same time by defining eight different symbols, each with different amplitude levels, or 4 bits by defining 16 symbols, each with different amplitude levels, and so on. At some point, however, it becomes very difficult to differentiate between the different amplitudes. The differences are so small that even a small amount of noise could destroy the signal. This same approach can be used for FM and PM. Two bits could be sent on the same symbol by defining four different frequencies, one for 11, one for 10, and so on, or by defining four phases (0°, 90°, 180°, and 270°).
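The four-amplitude, 2-bits-per-symbol scheme of Figure 3-18 can be sketched as a simple lookup (the amplitude values here are illustrative; only the mapping matters):

```python
# Four amplitude levels carry 2 bits per symbol, highest level = "11".
AMPLITUDE = {"11": 4, "10": 3, "01": 2, "00": 1}

def modulate(bitstring):
    # Split the bit stream into 2-bit groups; each group becomes one symbol.
    pairs = [bitstring[i:i + 2] for i in range(0, len(bitstring), 2)]
    return [AMPLITUDE[p] for p in pairs]

symbols = modulate("11100100")
print(symbols)       # [4, 3, 2, 1]
print(len(symbols))  # 4 symbols carry 8 bits: half as many as 1-bit AM
```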
Three bits could be sent by defining symbols with eight frequencies or eight phases (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°). These techniques are also subject to the same limitations as AM; as the number of different frequencies or phases becomes larger, it becomes difficult to differentiate among them. It is also possible to combine modulation techniques, that is, to use AM, FM, and PM techniques on the same circuit.

[Figure 3-18: Two-bit amplitude modulation, using four amplitude levels for the symbols 11, 10, 01, and 00. This data took 10 symbols with 1-bit amplitude modulation.]

For example, we could combine AM with four defined amplitudes (capable of sending 2 bits) with FM with four defined frequencies (capable of sending 2 bits) to enable us to send 4 bits on the same symbol. One popular technique is quadrature amplitude modulation (QAM). QAM involves splitting the symbol into eight different phases (3 bits) and two different amplitudes (1 bit), for a total of 16 different possible values. Thus, one symbol in QAM can represent 4 bits, while 256-QAM sends 8 bits per symbol. 64-QAM and 256-QAM are commonly used in digital TV services and cable modem Internet services.

Bit Rate versus Baud Rate versus Symbol Rate

The terms bit rate (i.e., the number of bits per second transmitted) and baud rate are used incorrectly much of the time. They often are used interchangeably, but they are not the same. In reality, the network designer or network user is interested in bits per second because it is the bits that are assembled into characters, characters into words, and, thus, business information. A bit is a unit of information. A baud is a unit of signaling speed used to indicate the number of times per second the signal on the communication circuit changes.
Because of the confusion over the term baud rate among the general public, ITU-T now recommends that the term baud rate be replaced by the term symbol rate. The bit rate and the symbol rate (or baud rate) are the same only when 1 bit is sent on each symbol. For example, if we use AM with two amplitudes, we send 1 bit on one symbol. Here, the bit rate equals the symbol rate. However, if we use QAM, we can send 4 bits on every symbol; the bit rate would be four times the symbol rate. If we used 64-QAM, the bit rate would be six times the symbol rate. Virtually all of today's modems send multiple bits per symbol.

3.5.2 Capacity of a Circuit

The data capacity of a circuit is the fastest rate at which you can send your data over the circuit in terms of the number of bits per second. The data rate (or bit rate) is calculated by multiplying the number of bits sent on each symbol by the maximum symbol rate. As we discussed in the previous section, the number of bits per symbol depends on the modulation technique (e.g., QAM sends 4 bits per symbol). The maximum symbol rate in any circuit depends on the bandwidth available and the signal-to-noise ratio (the strength of the signal compared with the amount of noise in the circuit). The bandwidth is the difference between the highest and the lowest frequencies in a band or set of frequencies. The range of human hearing is between 20 Hz and 14,000 Hz, so its bandwidth is 13,980 Hz. The maximum symbol rate for analog transmission is usually the same as the bandwidth as measured in hertz. If the circuit is very noisy, the maximum symbol rate may fall to as low as 50% of the bandwidth. If the circuit has very little noise, it is possible to transmit at rates up to the bandwidth. Digital transmission symbol rates can reach as high as two times the bandwidth for techniques that have only one voltage change per symbol (e.g., NRZ).
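The rule just described (data rate = bits per symbol × maximum symbol rate) can be sketched as follows; the `data_rate` helper is ours, and it assumes the analog case where the maximum symbol rate equals the bandwidth unless a different factor is given:

```python
import math

def data_rate(bandwidth_hz, modulation_levels, symbol_rate_factor=1.0):
    # Bits per symbol = log2 of the number of distinct symbols defined
    # by the modulation (e.g., 16-QAM defines 16 symbols -> 4 bits each).
    bits_per_symbol = int(math.log2(modulation_levels))
    symbol_rate = bandwidth_hz * symbol_rate_factor
    return bits_per_symbol * symbol_rate

# On a 4,000 Hz telephone line:
print(data_rate(4_000, 2))    # 4000.0 bps (basic AM, 1 bit per symbol)
print(data_rate(4_000, 16))   # 16000.0 bps (16-QAM, 4 bits per symbol)

# On a 10 MHz circuit with 64-QAM (6 bits per symbol):
print(data_rate(10_000_000, 64))   # 60000000.0 bps, i.e., 60 Mbps
```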
For digital techniques that have two voltage changes per symbol (e.g., RZ, Manchester), the maximum symbol rate is the same as the bandwidth. Standard telephone lines provide a bandwidth of 4,000 Hz. Under perfect circumstances, the maximum symbol rate is therefore about 4,000 symbols per second. If we were to use basic AM (1 bit per symbol), the maximum data rate would be 4,000 bits per second (bps). If we were to use QAM (4 bits per symbol), the maximum data rate would be 4 bits per symbol × 4,000 symbols per second = 16,000 bps. A circuit with a 10 MHz bandwidth using 64-QAM could provide up to 60 Mbps.

3.5.3 How Modems Transmit Data

The modem (an acronym for modulator/demodulator) takes the digital data from a computer in the form of electrical pulses and converts them into the analog signal that is needed for transmission over an analog voice-grade circuit. There are many different types of modems available today, from dial-up modems to cable modems. For data to be transmitted between two computers using modems, both need to use the same type of modem. Fortunately, several standards exist for modems, and any modem that conforms to a standard can communicate with any other modem that conforms to the same standard. A modem's data transmission rate is the primary factor that determines the throughput rate of data, but it is not the only factor. Data compression can increase throughput of data over a communication link by literally compressing the data. V.44, the ITU-T standard for data compression, uses Lempel-Ziv encoding. As a message is being transmitted, Lempel-Ziv encoding builds a dictionary of two-, three-, and four-character combinations that occur in the message. Anytime the same character pattern reoccurs in the message, the index to the dictionary entry is transmitted rather than sending the actual data.
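The dictionary idea behind Lempel-Ziv can be illustrated with a toy LZ78-style coder; the actual V.44 algorithm is considerably more elaborate, so this is a sketch of the principle only:

```python
def lz78_encode(text):
    """Toy LZ78 coder: emit (dictionary index, next character) pairs.
    Repeated patterns collapse to short index references."""
    dictionary = {"": 0}
    out, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                 # keep extending a known phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                           # flush any leftover known phrase
        out.append((dictionary[phrase], ""))
    return out

pairs = lz78_encode("ababababab")
print(pairs)                      # longer repeats yield ever-shorter output
print(len(pairs) < len("ababababab"))   # True: fewer tokens than characters
```

The longer the repeated patterns, the larger the dictionary entries grow and the greater the compression, which is why the real-world ratio depends so heavily on the data being sent.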
The reduction provided by V.44 compression depends on the actual data sent but usually averages about 6:1 (i.e., almost six times as much data can be sent per second using V.44 as without it).

3.6 DIGITAL TRANSMISSION OF ANALOG DATA

In the same way that digital computer data can be sent over analog telephone networks using analog transmission, analog voice data can be sent over digital networks using digital transmission. This process is somewhat similar to the analog transmission of digital data. A pair of special devices called codecs (coder/decoder) is used in the same way that a pair of modems is used to translate the data to send across the circuit. One codec is attached to the source of the signal (e.g., a telephone or the local loop at the end office) and translates the incoming analog voice signal into a digital signal for transmission across the digital circuit. A second codec at the receiver's end translates the digital data back into analog data.

3.6.1 Translating from Analog to Digital

Analog voice data must first be translated into a series of binary digits before they can be transmitted over a digital circuit. This is done by sampling the amplitude of the sound wave at regular intervals and translating it into a binary number. Figure 3-19 shows an example where eight different amplitude levels are used (i.e., each amplitude level is represented by 3 bits). The top diagram shows the original signal, and the bottom diagram shows the digitized signal. A quick glance will show that the digitized signal is only a rough approximation of the original signal. The original signal had a smooth flow, but the digitized signal has jagged "steps." The difference between the two signals is called quantizing error. Voice transmissions using digitized signals that have a great deal of quantizing error sound metallic or machinelike to the ear. There are two ways to reduce quantizing error and improve the quality of the digitized signal, but neither is without cost.
The first method is to increase the number of amplitude levels. This minimizes the difference between the levels (the “height” of the “steps”) and results in a smoother signal. In Figure 3-19, we could define 16 amplitude levels instead of eight levels. This would require 4 bits (rather than the current 3 bits) to represent the amplitude, thus increasing the amount of data needed to transmit the digitized signal. No number of levels or bits will ever result in perfect-quality sound reproduction, but in general, 7 bits (2^7 = 128 levels) reproduces human speech adequately. Music, on the other hand, typically uses 16 bits (2^16 = 65,536 levels).

After quantizing, samples are taken at specific points to produce amplitude modulated pulses. These pulses are then coded. Because we used eight pulse levels, we need only three bit positions to code each pulse. If we had used 128 pulse amplitudes, then a 7-bit code plus one parity bit would be required.

FIGURE 3-19 Pulse amplitude modulation (PAM). The signal (original wave) is quantized into 128 pulse amplitudes (PAM). In this example we have used only eight pulse amplitudes for simplicity. These eight amplitudes can be depicted by using only a 3-bit code (001 = PAM level 1, 010 = level 2, 011 = level 3, 100 = level 4, 101 = level 5, 110 = level 6, 111 = level 7, 000 = level 8) instead of the 8-bit code normally used to encode each pulse amplitude.

For digitizing a voice signal, 8,000 samples per second are taken. These 8,000 samples are then transmitted as a serial stream of 0s and 1s. In our case, 8,000 samples times 3 bits per sample would require a 24,000 bps transmission rate.
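The arithmetic relating amplitude levels, bits per sample, and transmission rate can be checked directly (a small sketch; the function names are ours):

```python
import math

def bits_for_levels(levels):
    """Bits needed to assign a distinct code to each amplitude level."""
    return math.ceil(math.log2(levels))

def digitized_bit_rate(samples_per_second, levels):
    """Transmission rate (bps) = samples per second x bits per sample."""
    return samples_per_second * bits_for_levels(levels)

# Eight levels need 3 bits, so 8,000 samples/second -> 24,000 bps;
# 128 levels need 7 bits -> 56,000 bps;
# 256 levels need 8 bits, giving the usual 64,000 bps voice channel.
print(digitized_bit_rate(8000, 8))     # 24000
print(digitized_bit_rate(8000, 128))   # 56000
print(digitized_bit_rate(8000, 256))   # 64000
```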
In reality, 8 bits per sample times 8,000 samples requires a 64,000 bps transmission rate.

The second method is to sample more frequently. This will reduce the “length” of each “step,” also resulting in a smoother signal. To obtain a reasonable-quality voice signal, one must sample at least twice the highest possible frequency in the analog signal. You will recall that the highest frequency transmitted in telephone circuits is 4,000 Hz. Thus, the methods used to digitize telephone voice transmissions must sample the input voice signal at a minimum of 8,000 times per second. Sampling more frequently than this (called oversampling) will improve signal quality. RealNetworks.com, which produces Real Audio and other Web-based tools, sets its products to sample at 48,000 times per second to provide higher quality. The iPod and most CDs sample at 44,100 times per second and use 16 bits per sample to produce almost error-free music. Some other MP3 players sample less frequently and use fewer bits per sample to produce smaller transmissions, but the sound quality may suffer.

3.6.2 How Telephones Transmit Voice Data

When you make a telephone call, the telephone converts your analog voice data into a simple analog signal and sends it down the circuit from your home to the telephone company’s network. This process is almost unchanged from the one used by Bell when he invented the telephone in 1876. With the invention of digital transmission, the common carriers (i.e., the telephone companies) began converting their voice networks to use digital transmission. Today, all of the common carrier networks use digital transmission, except in the local loop (sometimes called the last mile), the wires that run from your home or business to the telephone switch that connects your local loop into the telephone network.
This switch contains a codec that converts the analog signal from your phone into a digital signal. This digital signal is then sent through the telephone network until it hits the switch for the local loop of the person you are calling. This switch uses its codec to convert the digital signal used inside the phone network back into the analog signal needed by that person’s local loop and telephone. See Figure 3-20. There are many different combinations of sampling frequencies and numbers of bits per sample that could be used. For example, one could sample 4,000 times per second using 128 amplitude levels (i.e., 7 bits) or sample at 16,000 times per second using 256 levels (i.e., 8 bits). The North American telephone network uses pulse code modulation (PCM). With PCM, the input voice signal is sampled 8,000 times per second. Each time the input voice signal is sampled, 8 bits are generated.[4] Therefore, the transmission speed on the digital circuit must be 64,000 bps (8 bits per sample × 8,000 samples per second) to transmit a voice signal when it is in digital form.

FIGURE 3-20 The sender’s codec converts the original analog sound wave into a digital signal that is carried through the telephone network; the receiver’s codec converts the digital signal back into a reproduced analog sound wave.

[4] Seven of those bits are used to represent the voice signal, and 1 bit is used for control purposes.

Thus, the North American telephone network is built using
Page 83Digital Transmission of Analog Data 83millions of 64 Kbps digital circuits that connect via codecs to the millions of miles of analog local loop circuits into the users’ residences and businesses.3.6.3 How Instant Messenger Transmits Voice Data A 64 Kbps digital circuit works very well for transmitting voice data because it provides very good quality. The problem is that it requires a lot of capacity. Adaptive differential pulse code modulation (ADPCM) is the alternative used by IM and many other applications that provide voice services over lower-speed digital circuits. ADPCM works in much the same way as PCM. It samples incoming voice signals 8,000 times per second and calculates the same 8-bit amplitude value as PCM. However, instead of transmitting the 8-bit value, it transmits the difference between the 8-bit value in the last time interval and the current 8-bit value (i.e., how the amplitude has changed from one time period to another). Because analog voice signals change slowly, these changes can be adequately represented by using only 4 bits. This means that ADPCM can be used on digital circuits that provide only 32 Kbps (4 bits per sample × 8,000 samples per second = 32,000 bps). Several versions of ADPCM have been developed and standardized by the ITU-T. There are versions designed for 8 Kbps circuits (which send 1 bit 8,000 times per second) and 16 Kbps circuits (which send 2 bits 8,000 times per second), as well as the original 32 Kbps version. However, there is a trade-off here. Although the 32 Kbps version usually provides as good a sound quality as that of a traditional voice telephone circuit, the 8 Kbps and 16 Kbps versions provide poorer sound quality.3.6.4 Voice over Internet Protocol (VoIP) Voice over Internet Protocol (VoIP, pronounced “voyp”) is commonly used to transmit phone conversations over digital networks. 
VoIP is a relatively new standard that uses digital telephones with built-in codecs to convert analog voice data into digital data (see Figure 3-21). Because the codec is built into the telephone, the telephone transmits digital data and therefore can be connected directly into a local area network, in much the same manner as a typical computer. Because VoIP phones operate on the same networks as computers, we can reduce the amount of wiring needed; with VoIP, we need to operate and maintain only one network throughout our offices, rather than two separate networks—one for voice and one for data. However, this also means that data networks with VoIP phones must be designed to operate in emergencies (to enable 911 calls) even when the power fails; they must have uninterruptable power supplies (UPS) for all network circuits.

FIGURE 3-21 VoIP phone. Source: Courtesy Cisco Systems, Inc.

One commonly used VoIP standard is G.722 wideband audio, which is a version of ADPCM that operates at 64 Kbps. It samples 16,000 times per second and produces 4 bits per sample. Because VoIP phones are digital, they can also contain additional capabilities. For example, high-end VoIP phones often contain computer chips to enable them to download and install small software applications so that they can function in many ways like computers.

3.7 IMPLICATIONS FOR MANAGEMENT

In the past, networks used to be designed so that the physical cables transported data in the same form in which the data were created: Analog voice data generated by telephones used to be carried by analog transmission cables, and digital computer data used to be carried by digital transmission cables. Today, it is simple to separate the different types of data (analog voice or digital computer) from the actual physical cables used to carry the data.
In most cases, the cheapest and highest-quality media are digital, which means that most data today are transmitted in digital form. Thus, the convergence of voice, video, and data at the physical layer is being driven primarily by business reasons: Digital is better. The change in physical layers also has implications for organizational structure. Voice data used to be managed separately from computer data because they used different types of networks. As the physical networks converge, so too do the organizational units responsible for managing the data. Today, more organizations are placing the management of voice telecommunications into their information systems organizations. This also has implications for the telecommunications industry. Over the past few years, the historical separation between manufacturers of networking equipment used in organizations and manufacturers of networking equipment used by the telephone companies has crumbled. There have been some big winners and losers in the stock market from the consolidation of these markets.

SUMMARY

Circuits: Networks can be configured so that there is a separate circuit from each client to the host (called a point-to-point configuration) or so that several clients share the same circuit (a multipoint configuration). Data can flow through the circuit in one direction only (simplex), in both directions simultaneously (full duplex), or by taking turns so that data sometimes flow in one direction and then in the other (half duplex). A multiplexer is a device that combines several simultaneous low-speed circuits on one higher-speed circuit so that each low-speed circuit believes it has a separate circuit.
In general, the transmission capacity of the high-speed circuit must equal or exceed the sum of the low-speed circuits.

Communication Media: Media are either guided, in that they travel through a physical cable (e.g., twisted pair wires, coaxial cable, or fiber-optic cable), or wireless, in that they are broadcast through the air (e.g., radio, microwave, or satellite). Among the guided media, fiber-optic cable can transmit data the fastest with the fewest errors and offers greater security but costs the most; twisted pair wire is the cheapest and most commonly used. The choice of wireless media depends more on distance than on any other factor; radio is cheapest for short distances, microwave is cheapest for moderate distances, and satellite is cheapest for long distances.

Digital Transmission of Digital Data: Digital transmission (also called baseband transmission) is done by sending a series of electrical (or light) pulses through the media. Digital transmission is preferred to analog transmission because it produces fewer errors; is more efficient; permits higher maximum transmission rates; is more secure; and simplifies the integration of voice, video, and data on the same circuit. With unipolar digital transmission, the voltage alternates between 0 volts to represent a binary 0 and some positive value (e.g., +15 volts) to represent a binary 1. With bipolar digital transmission, the voltage changes polarity (i.e., positive or negative) to represent a 1 or a 0. Bipolar is less susceptible to errors. Ethernet uses Manchester encoding, which is a version of unipolar transmission.

Analog Transmission of Digital Data: Modems are used to translate the digital data produced by computers into the analog signals for transmission over today’s voice communication circuits. Both the sender and receiver need to have a modem.
Data are transmitted by changing (or modulating) a carrier sound wave’s amplitude (height), frequency (length), or phase (shape) to indicate a binary 1 or 0. For example, in amplitude modulation, one amplitude is defined to be a 1 and another amplitude is defined to be a 0. It is possible to send more than 1 bit on every symbol (or wave). For example, with amplitude modulation, you could send 2 bits on each wave by defining four amplitude levels. The capacity or maximum data rate that a circuit can transmit is determined by multiplying the symbol rate (symbols per second) by the number of bits per symbol. Generally (but not always), the symbol rate is the same as the bandwidth, so bandwidth is often used as a measure of capacity. V.44 is a data compression standard that can be combined with any of the foregoing types of modems to reduce the amount of data in the transmitted signal by a factor of up to six. Thus, a V.92 modem using V.44 could provide an effective data rate of 56,000 × 6 = 336,000 bps.

Digital Transmission of Analog Data: Because digital transmission is better, analog voice data are sometimes converted to digital transmission. Pulse code modulation (PCM) is the most commonly used technique. PCM samples the amplitude of the incoming voice signal 8,000 times per second and uses 8 bits to represent the signal.
PCM produces a reasonable approximation of the human voice, but more sophisticated techniques are needed to adequately reproduce more complex sounds such as music.

KEY TERMS

adaptive differential pulse code modulation (ADPCM), American Standard Code for Information Interchange (ASCII), amplitude, amplitude modulation (AM), amplitude shift keying (ASK), analog transmission, bandwidth, baud rate, bipolar, bit rate, bits per second (bps), carrier wave, circuit, circuit configuration, coaxial cable, codec, coding scheme, cycles per second, data compression, data rate, digital subscriber line (DSL), digital transmission, fiber-optic cable, frequency, frequency division multiplexing (FDM), frequency modulation (FM), frequency shift keying (FSK), full-duplex transmission, guided media, half-duplex transmission, ISO 8859, kilohertz (kHz), Lempel–Ziv encoding, local loop, logical circuit, Manchester encoding, microwave transmission, modem, multiplexing, multipoint circuit, parallel transmission, phase, phase modulation (PM), phase shift keying (PSK), physical circuit, plain old telephone service (POTS), point-to-point circuit, polarity, pulse code modulation (PCM), quadrature amplitude modulation (QAM), quantizing error, radio transmission, retrain time, satellite transmission, serial transmission, simplex transmission, statistical time division multiplexing (STDM), switch, symbol rate, time division multiplexing (TDM), turnaround time, twisted pair cable, Unicode, unipolar, V.44, Voice over Internet Protocol (VoIP), wavelength division multiplexing (WDM), wireless media

QUESTIONS

1. How does a multipoint circuit differ from a point-to-point circuit?
2.
Describe the three types of data flows.
3. Describe three types of guided media.
4. Describe four types of wireless media.
5. How do analog data differ from digital data?
6. Clearly explain the differences among analog data, analog transmission, digital data, and digital transmission.
7. Explain why most telephone company circuits are now digital.
8. What is coding?
9. Briefly describe three important coding schemes.
10. How are data transmitted in parallel?
11. What feature distinguishes serial mode from parallel mode?
12. How does bipolar signaling differ from unipolar signaling? Why is Manchester encoding more popular than either?
13. What are three important characteristics of a sound wave?
14. What is bandwidth? What is the bandwidth in a traditional North American telephone circuit?
15. Describe how data could be transmitted using amplitude modulation.
16. Describe how data could be transmitted using frequency modulation.
17. Describe how data could be transmitted using phase modulation.
18. Describe how data could be transmitted using a combination of modulation techniques.
19. Is the bit rate the same as the symbol rate? Explain.
20. What is a modem?
21. What is quadrature amplitude modulation (QAM)?
22. What is 64-QAM?
23. What factors affect transmission speed?
24. What is oversampling?
25. Why is data compression so useful?
26. What data compression standard uses Lempel–Ziv encoding? Describe how it works.
27. Explain how pulse code modulation (PCM) works.
28. What is quantizing error?
29. What is the term used to describe the placing of two or more signals on a single circuit?
30. What is the purpose of multiplexing?
31. How does DSL (digital subscriber line) work?
32. Of the different types of multiplexing, what distinguishes
a. frequency division multiplexing (FDM)?
b. time division multiplexing (TDM)?
c. statistical time division multiplexing (STDM)?
d. wavelength division multiplexing (WDM)?
33. What is the function of inverse multiplexing (IMUX)?
34. If you were buying a multiplexer, would you choose TDM or FDM? Why?
35. Some experts argue that modems may soon become obsolete. Do you agree? Why or why not?
36. What is the maximum capacity of an analog circuit with a bandwidth of 4,000 Hz using QAM?
37. What is the maximum data rate of an analog circuit with a 10 MHz bandwidth using 64-QAM and V.44?
38. What is the capacity of a digital circuit with a symbol rate of 10 MHz using Manchester encoding?
39. What is the symbol rate of a digital circuit providing 100 Mbps if it uses bipolar NRZ signaling?
40. What is VoIP?

EXERCISES

A. Investigate the costs of dumb terminals, network computers, minimally equipped personal computers, and top-of-the-line personal computers. Many equipment manufacturers and resellers are on the Web, so it’s a good place to start looking.
B. Investigate the different types of cabling used in your organization and where they are used (e.g., LAN, backbone network).
C. Three terminals (T1, T2, T3) are to be connected to three computers (C1, C2, C3) so that T1 is connected to C1, T2 to C2, and T3 to C3. All are in different cities. T1 and C1 are 1,500 miles apart, as are T2 and C2, and T3 and C3. The points T1, T2, and T3 are 25 miles apart, and the points C1, C2, and C3 also are 25 miles apart. If telephone lines cost $1 per mile, what is the line cost for the three connections?
D. Investigate different types of satellite communication services that are provided today.
E. Draw how the bit pattern 01101100 would be sent using
a. Single-bit AM
b. Single-bit FM
c. Single-bit PM
d. Two-bit AM (i.e., four amplitude levels)
e. Two-bit FM (i.e., four frequencies)
f. Two-bit PM (i.e., four different phases)
g. Single-bit AM combined with single-bit FM
h. Single-bit AM combined with single-bit PM
i. Two-bit AM combined with two-bit PM
F.
If you had to download a 20-page paper of 400 K (bytes) from your professor, approximately how long would it take to transfer it over the following circuits? Assume that control characters add an extra 10% to the message.
a. Dial-up modem at 33.6 Kbps
b. Cable modem at 384 Kbps
c. Cable modem at 1.5 Mbps
d. If the modem includes V.44 data compression with a 6:1 data compression ratio, what is the data rate in bits per second you would actually see in choice c?

MINICASES

I. Eureka! (Part 1) Eureka! is a telephone- and Internet-based concierge service that specializes in obtaining things that are hard to find (e.g., Super Bowl tickets, first-edition books from the 1500s, Fabergé eggs). It currently employs 60 staff members who collectively provide 24-hour coverage (over three shifts). They answer the phones and respond to requests entered on the Eureka! Web site. Much of their work is spent on the phone and on computers searching on the Internet. The company has just leased a new office building and is about to wire it. What media would you suggest the company install in its office and why?

II. Eureka! (Part 2) Eureka! is a telephone- and Internet-based concierge service that specializes in obtaining things that are hard to find (e.g., Super Bowl tickets, first-edition books from the 1500s, Fabergé eggs). It currently employs 60 staff members who work 24 hours per day (over three shifts). Staff answer the phone and respond to requests entered on the Eureka! Web site. Much of their work is spent on the phone and on computers searching on the Internet. What type of connections should Eureka! consider from its offices to the outside world, in terms of phone and Internet? Outline the pros and cons of each alternative below and make a recommendation. The company has three alternatives:
1. Should the company use standard voice lines but use DSL for its data ($40 per month per line for both services)?
2.
Should the company separate its voice and data needs, using standard analog services for voice but finding some advanced digital transmission services for data ($40 per month for each voice line and $300 per month for a circuit with 1.5 Mbps of data)?
3. Should the company search for all-digital services for both voice and data ($60 per month for an all-digital circuit that provides two phone lines that can be used for two voice calls, one voice call and one data call at 64 Kbps, or one data call at 128 Kbps)?

III. Eureka! (Part 3) Eureka! is a telephone- and Internet-based concierge service that specializes in obtaining things that are hard to find (e.g., Super Bowl tickets, first-edition books from the 1500s, Fabergé eggs). It currently employs 60 staff members who work 24 hours per day (over three shifts). Staff members answer phone calls and respond to requests entered on the Eureka! Web site. Currently, each staff member has a desktop PC with two monitors and a twisted pair connection (Cat5e) that offers speeds up to 100 Mbps. Some employees made a suggestion to the CEO of Eureka! to upgrade their connection to a fiber-optic cable that can provide speeds up to 1 Gbps. What do you think about this idea? How easy (difficult) is it to change wiring from twisted pair to fiber optic? Can we use the same network cards in the PCs, or do we need to change them? How much would this change cost?

IV. Speedy Package Speedy Package is a same-day package delivery service that operates in Chicago. Each package has a shipping label that is attached to the package and is also electronically scanned and entered into Speedy’s data network when the package is picked up and when it is delivered. The electronic labels are transmitted via a device that operates on a cell phone network.
1.
Assuming that each label is 1,000 bytes long, how long does it take to transmit one label over the cell network, assuming that the cell phone network operates at 144 Kbps (144,000 bits per second) and that there are 8 bits in a byte?
2. If Speedy were to upgrade to the new, faster digital phone network that transmits data at 200 Kbps (200,000 bits per second), how long would it take to transmit a label?

V. Boingo Reread Management Focus 3.2. What other alternatives can travelers consider? How is Boingo different from other companies offering hot spots, such as T-Mobile or AT&T?

CASE STUDY

NEXT-DAY AIR SERVICE See the Web site at www.wiley.com/college/fitzgerald.

HANDS-ON ACTIVITY 3A

Looking Inside Your Cable

One of the most commonly used types of local network cable is Category 5 unshielded twisted pair cable, commonly called “Cat 5.” Cat 5 (and an enhanced version called Cat 5e) are used in Ethernet LANs. If you have installed a LAN in your house or apartment, you probably used Cat 5 or Cat 5e. Figure 3-22 shows a picture of a typical Cat 5 cable. Each end of the cable has a connector called an RJ-45 connector that enables the cable to be plugged into a computer or network device. If you look closely at the connector, you will see there are eight separate “pins.” You might think that this would mean the Cat 5 can transmit data in parallel, but it doesn’t do this. Cat 5 is used for serial transmission. If you have an old Cat 5 cable (or are willing to spend a few dollars to buy cheap cable), it is simple to take the connector off. Simply take a pair of scissors and cut through the cable a few inches from the connector. Figure 3-23 shows the same Cat 5 cable with the connector cut off.
You can see why twisted pair is called twisted pair: A single Cat 5 cable contains four separate sets of twisted pair wires for a total of eight wires.

FIGURE 3-22 Cat 5 cable

Unfortunately, this picture is in black and white, so it is hard to see the different colors of the eight wires inside the cable. Figure 3-24 lists the different colors of the wires and what they are used for under the EIA/TIA 568B standard (the less common 568A standard uses the pins in different ways). One pair of wires (connected to pins 1 and 2) is used to transmit data from your computer into the network.

FIGURE 3-23 Inside a Cat 5 cable. Source: Courtesy of Belkin International, Inc.

When your computer transmits, it sends the same data on both wires; pin 1 (transmit+) transmits the data normally and pin 2 (transmit−) transmits the same data with reversed polarity. This way, if an error occurs, the hardware will likely detect a different signal on the two cables. For example, if there is a sudden burst of electricity with a positive polarity (or a negative polarity), it will change only one of the transmissions from negative to positive (or vice versa) and leave the other transmission unchanged. Electrical pulses generate a magnetic field that has very bad side effects on the other wires. To minimize this, the two transmit wires are twisted together so that the other wires in the cable receive both a positive and a negative polarity magnetic field from the wires twisted around each other, which cancel each other out. Figure 3-24 also shows a separate pair of wires for receiving transmissions from the network (pin 3 [receive+] and pin 6 [receive−]). These wires work exactly the same way as transmit+ and transmit− but are used by the network to send data to your computer.
You’ll notice that they are also twisted together in one pair of wires, even though they are not side by side on the connector. Figure 3-24 shows the pin functions from the viewpoint of your computer. If you think about it, you’ll quickly realize that the pin functions at the network end of the cable are reversed; that is, pin 1 is receive+ because it is the wire that the network uses to receive the transmit+ signal from your computer. Likewise, pin 6 at the network end is the transmit− wire because it is the wire on which your computer receives the reversed data signal. The separate set of wires for transmitting and receiving means that Cat 5 is designed for full-duplex transmission. It can send and receive at the same time because one set of wires is used for sending data and one set is used for receiving data. However, Cat 5 is not often used this way. Most hardware that uses Cat 5 is designed to operate in a half-duplex mode, even though the cable itself is capable of full duplex. You’ll also notice that the other four wires in the cable are not used. Yes, that’s right; they are simply wasted.

Deliverable: Find a Cat 5 or Cat 5e cable and record what color wires are used for each pin.

FIGURE 3-24 Pin connections for Cat 5 at the computer end (EIA/TIA 568B standard):
Pin 1: White with orange stripe (Transmit+)
Pin 2: Orange with white stripe or solid orange (Transmit−)
Pin 3: White with green stripe (Receive+)
Pin 4: Blue with white stripe or solid blue (Not used)
Pin 5: White with blue stripe (Not used)
Pin 6: Green with white stripe or solid green (Receive−)
Pin 7: White with brown stripe (Not used)
Pin 8: Brown with white stripe or solid brown (Not used)

HANDS-ON ACTIVITY 3B

Making MP3 Files

MP3 files are good examples of analog-to-digital conversion. It is simple to take an analog signal—such as your voice—and convert it into a digital file for transmission or playback.
In this activity, we will show you how to record your voice and see how different levels of digital quality affect the sound. First, you need to download a sound editor and MP3 converter. One very good sound editor is Audacity—and it’s free. Go to audacity.sourceforge.net and download and install the Audacity software. You will also need the plug-in called LAME (an MP3 encoder), which is also free and available at lame.sourceforge.net. Use Audacity to record music or your voice (you can use a cheap microphone). Audacity records in very high quality but will produce MP3 files in whatever quality level you choose. Once you have the file recorded, you can edit the Preferences to change the File Format to use in saving the MP3 file. Audacity/LAME offers a wide range of qualities. Try recording at least three different quality levels. For example, for high quality, you could use 320 Kbps, which means the recording uses 320 kilobits of data per second. In other words, the number of samples per second times the number of bits per sample equals 320 Kbps. For regular quality, you could use 128 Kbps. For low quality, you could use 16 Kbps. Create each of these files and listen to them to hear the differences in quality produced by the quantizing error. The differences should be most noticeable for music. A recording at 24 Kbps is often adequate for voice, but music will require a better-quality encoding.

Deliverable:
1. Produce three MP3 files of the same music or voice recording at three different quality levels.
2. List the size of each file.
3. Listen to each file and describe the quality differences you hear (if any).

HANDS-ON ACTIVITY 3C

Making a Cat 5e Patch Cable

A patch cable is a cable that runs a short distance (usually less than 10 feet) that connects a device into a wall jack, a patch panel jack, or a device. If you have a desktop computer, you’re using a patch cable to connect it into your Ethernet LAN.
Patch cables are relatively inexpensive (usually $10 or less), but compared to the cost of their materials, they are expensive (the materials usually cost less than $1). Because it is relatively easy to make a patch cable, many companies make their own in order to save money. To make your own patch cable, you will need a crimper, some Cat 5e cable, two RJ45 connectors, and a cable tester (optional). See Figure 3-25.
1. Using the cutter on the crimping tool, cut a desired length of Cat 5e cable.
2. Insert the end of the cable into the stripper and gently press on the cable while rotating it to remove the outer insulation of the cable. Be careful not to cut the twisted pairs inside. After removing the outer insulation, visually inspect the twisted pairs for damage. Do this on both ends of your cable. If any of the cables are damaged, you need to cut them and start over.
3. Untwist the twisted pairs and straighten them. Once they are straightened, put them into this order: orange-white, orange, green-white, blue, blue-white, green, brown-white, brown.
4. Hold the cable in your right hand; the orange-white wire should be closest to you. Hold the RJ45 connector in your left hand with the little “handle” on the bottom.
5. Insert the wires inside the connector all the way to the end—you should be able to see the colors of the wires when you look at the front of the connector. Make sure that the wires don’t change order. The white insulation should be about 1/3 of the way inside the connector. (If you used the stripper on the tool properly, the length of the wires will be exactly as needed to fit the RJ45 connector.)
6. Now you are ready to crimp the connector. Insert the RJ45 connector into the crimper and press really hard. This will push the gold contacts on the connector onto the twisted pairs.
FIGURE 3-25 Tools and materials for making a patch cable: a crimper (with cutter and stripper), Cat 5e cable, RJ45 connectors, and a cable tester.

7. Crimp the other end of the cable by repeating steps 3 through 6.
8. The final step is to test your cable. Turn on the cable tester and insert both ends of the patch cable into the tester. If you see the flashing light going down the indicators 1 through 8, not skipping any number or changing the order, you made a fully functional patch cable. If you don't have a cable tester, you can use the cable to connect your computer to an Ethernet LAN. If you're able to use the LAN, the cable is working.

Deliverable
A working patch cable.

Trim Size: 8in x 10in Fitzergald c04.tex V2 - July 25, 2014 9:18 A.M.

CHAPTER 4 DATA LINK LAYER

The data link layer (also called layer 2) is responsible for moving a message from one computer or network device to the next computer or network device in the overall path from sender to receiver. It controls the way messages are sent on the physical media. Both the sender and receiver have to agree on the rules, or protocols, that govern how they will communicate with each other. A data link protocol determines who can transmit at what time, where a message begins and ends, and how a receiver recognizes and corrects a transmission error.
In this chapter, we discuss these processes as well as several important sources of errors.

OBJECTIVES
◾ Understand the role of the data link layer
◾ Become familiar with two basic approaches to controlling access to the media
◾ Become familiar with common sources of error and their prevention
◾ Understand three common error detection and correction methods
◾ Become familiar with several commonly used data link protocols

OUTLINE
4.1 Introduction
4.2 Media Access Control
4.2.1 Contention
4.2.2 Controlled Access
4.2.3 Relative Performance
4.3 Error Control
4.3.1 Sources of Errors
4.3.2 Error Prevention
4.3.3 Error Detection
4.3.4 Error Correction via Retransmission
4.3.5 Forward Error Correction
4.3.6 Error Control in Practice
4.4 Data Link Protocols
4.4.1 Asynchronous Transmission
4.4.2 Synchronous Transmission
4.5 Transmission Efficiency
4.6 Implications for Management
Summary

4.1 INTRODUCTION

In Chapter 1, we introduced the concept of layers in data communications. The data link layer sits between the physical layer (hardware such as the circuits, computers, and multiplexers described in Chapter 3) and the network layer (which performs addressing and routing, as described in Chapter 5). The data link layer is responsible for sending and receiving messages to and from other computers. Its job is to reliably move a message from one computer over one circuit to the next computer where the message needs to go. The data link layer performs two main functions and therefore is often divided into two sublayers. The first sublayer (called the logical link control [LLC] sublayer) is the data link layer's connection to the network layer above it. At the sending computer, the LLC sublayer software is responsible for communicating with the network layer software (e.g., IP) and for taking the network layer Protocol Data Unit (PDU)—usually an IP packet—and surrounding it with a data link layer PDU—often an Ethernet frame.
At the receiving computer, the LLC sublayer software removes the data link layer PDU and passes the message it contains (usually an IP packet) to the network layer software.

The second sublayer (called the media access control [MAC] sublayer) controls the physical hardware. The MAC sublayer software at the sending computer controls how and when the physical layer converts bits into the physical symbols that are sent down the circuit: it takes the data link layer PDU from the LLC sublayer, converts it into a stream of bits, and controls when the physical layer actually transmits the bits over the circuit. At the receiving computer, the MAC sublayer receives a stream of bits from the physical layer, translates it into a coherent PDU, ensures that no errors have occurred in transmission, and passes the data link layer PDU to the LLC sublayer.

Both the sender and receiver have to agree on the rules or protocols that govern how their data link layers will communicate with each other. A data link protocol performs three functions:
◾ Controls when computers transmit (media access control)
◾ Detects and corrects transmission errors (error control)
◾ Identifies the start and end of a message by using a PDU (message delineation)

4.2 MEDIA ACCESS CONTROL

Media access control refers to the need to control when computers transmit. With point-to-point full-duplex configurations, media access control is unnecessary because there are only two computers on the circuit, and full duplex permits either computer to transmit at any time. Media access control becomes important when several computers share the same communication circuit, such as a point-to-point configuration with half-duplex transmission that requires computers to take turns, or a multipoint configuration in which several computers share the same circuit.
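The LLC encapsulation described above can be sketched as a toy example. This is an illustration, not a real Ethernet frame builder: the header layout mimics Ethernet (two 6-byte addresses plus a 2-byte type field) and the trailer is a CRC-32 check value, but real frames carry additional fields and a precisely specified frame check sequence.

```python
# Toy sketch of LLC-style encapsulation: a network-layer PDU (a stand-in for an
# IP packet) is surrounded by a data-link-layer header and trailer.
import struct
import zlib

def encapsulate(dst_mac: bytes, src_mac: bytes, ip_packet: bytes) -> bytes:
    header = dst_mac + src_mac + struct.pack("!H", 0x0800)  # 0x0800 = EtherType for IPv4
    fcs = struct.pack("!I", zlib.crc32(header + ip_packet))  # CRC-32 check value
    return header + ip_packet + fcs

def decapsulate(frame: bytes) -> bytes:
    header, payload, fcs = frame[:14], frame[14:-4], frame[-4:]
    assert struct.pack("!I", zlib.crc32(header + payload)) == fcs, "transmission error"
    return payload  # handed up to the network layer

packet = b"pretend IP packet"
frame = encapsulate(b"\xaa" * 6, b"\xbb" * 6, packet)
print(decapsulate(frame) == packet)  # True
```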
Here, it is critical to ensure that no two computers attempt to transmit data at the same time—but if they do, there must be a way to recover from the problem. There are two fundamental approaches to media access control: contention and controlled access.

4.2.1 Contention

With contention, computers wait until the circuit is free (i.e., no other computers are transmitting) and then transmit whenever they have data to send. Contention is commonly used in Ethernet LANs. As an analogy, suppose that you are talking with some friends. People listen, and if no one is talking, they can talk. If you want to say something, you wait until the speaker is done and then you try to talk. Usually, people yield to the first person who jumps in at the precise moment the previous speaker stops. Sometimes two people attempt to talk at the same time, so there must be some technique to continue the conversation after such a verbal collision occurs.

4.2.2 Controlled Access

With controlled access, one device controls the circuit and determines which clients can transmit at what time. There are two commonly used controlled access techniques: access requests and polling. With the access request technique, client computers that want to transmit send a request to transmit to the device that is controlling the circuit (e.g., the wireless access point). The controlling device grants permission for one computer at a time to transmit. When one computer has permission to transmit, all other computers wait until that computer has finished, and then, if they have something to transmit, they use a contention technique to send an access request. The access request technique is like a classroom situation in which the instructor calls on the students who raise their hands. The instructor acts like the controlling access point.
When they want to talk, students raise their hands and the instructor recognizes them so they can contribute. When they have finished, the instructor again takes charge and allows someone else to talk. And of course, just like in a classroom, the wireless access point can choose to transmit whenever it likes. Polling is the process of sending a signal to a client computer that gives it permission to transmit. With polling, the clients store all messages that need to be transmitted. Periodically, the controlling device (e.g., a wireless access point) polls the client to see if it has data to send. If the client has data to send, it does so. If the client has no data to send, it responds negatively, and the controller asks another client if it has data to send. There are several types of polling. With roll-call polling, the controller works consecutively through a list of clients, first polling client 1, then client 2, and so on, until all are polled. Roll-call polling can be modified to select clients in priority so that some get polled more often than others. For example, one could increase the priority of client 1 by using a polling sequence such as 1, 2, 3, 1, 4, 5, 1, 6, 7, 1, 8, 9. Typically, roll-call polling involves some waiting because the controller has to poll a client and then wait for a response. The response might be an incoming message that was waiting to be sent, a negative response indicating nothing is to be sent, or the full “time-out period” may expire because the client is temporarily out of service (e.g., it is malfunctioning or the user has turned it off). Usually, a timer “times out” the client after waiting several seconds without getting a response. If some sort of fail-safe time-out is not used, the circuit poll might lock up indefinitely on an out-of-service client. 
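Roll-call polling with priority can be sketched as below, using the text's polling sequence; the queue structure and function names are illustrative assumptions:

```python
# Roll-call polling with priority: client 1 appears repeatedly in the polling
# sequence (as in the text's example), so it is polled more often than the others.
from collections import Counter

POLL_SEQUENCE = [1, 2, 3, 1, 4, 5, 1, 6, 7, 1, 8, 9]

def run_polls(pending: dict[int, list[str]], rounds: int = 1) -> list[str]:
    """Poll each client in sequence; a client with queued data sends one message."""
    sent = []
    for _ in range(rounds):
        for client in POLL_SEQUENCE:
            if pending.get(client):          # client answers the poll with data
                sent.append(pending[client].pop(0))
            # otherwise the client responds negatively and the controller moves on
    return sent

counts = Counter(POLL_SEQUENCE)
print(counts[1], counts[2])  # 4 1 -- client 1 is polled four times per cycle, others once
```

Running `run_polls({1: ["a", "b"], 2: ["c"]})` delivers client 1's second message before clients 4 through 9 are even polled, showing the priority effect.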
With hub polling (often called token passing), one device starts the poll and passes it to the next computer on the multipoint circuit, which sends its message and passes the poll to the next. That computer then passes the poll to the next, and so on, until it reaches the first computer, which restarts the process.

4.2.3 Relative Performance

Which media access control approach is best: controlled access or contention? There is no simple answer. The key consideration is throughput—which approach will permit the greatest amount of user data to be transmitted through the network. In general, contention approaches work better than controlled approaches for small networks that have low usage. In this case, each computer can transmit when necessary, without waiting for permission. Because usage is low, there is little chance of a collision. In contrast, computers in a controlled access environment must wait for permission, so even if no other computer needs to transmit, they must wait for the poll. The reverse is true for large networks with high usage: Controlled access works better. In high-volume networks, many computers want to transmit, and the probability of a collision using contention is high. Collisions are very costly in terms of throughput because they waste circuit capacity during the collision and require both computers to retransmit later. Controlled access prevents collisions and makes more efficient use of the circuit, and although response time does increase, it does so more gradually (Figure 4-1). The key to selecting the best access control technique is to find the crossover point between controlled and contention. Although there is no one correct answer, because it depends on how many messages the computers in the network transmit, most experts believe that the crossover point is often around 20 computers (lower for busy computers,
higher for less-busy computers). For this reason, when we build shared multipoint circuits like those often used in LANs or wireless LANs, we try to put no more than 20 computers on any one shared circuit.

FIGURE 4-1 Relative response times: with contention, response time rises sharply as traffic grows from low to high; with controlled access, it rises more gradually.

4.3 ERROR CONTROL

Before learning the control mechanisms that can be implemented to protect a network from errors, you should realize that there are human errors and network errors. Human errors, such as a mistake in typing a number, usually are controlled through the application program. Network errors, such as those that occur during transmission, are controlled by the network hardware and software. There are two categories of network errors: corrupted data (data that have been changed) and lost data. Networks should be designed to (1) prevent, (2) detect, and (3) correct both corrupted data and lost data. We begin by examining the sources of errors and how to prevent them and then turn to error detection and correction.

Network errors are a fact of life in data communications networks. Depending on the type of circuit, they may occur every few hours, minutes, or seconds because of noise on the lines. No network can eliminate all errors, but most errors can be prevented, detected, and corrected by proper design. Inter-Exchange Carriers (IXCs) that provide data transmission circuits provide statistical measures specifying typical error rates and the pattern of errors that can be expected on the circuits they lease. For example, the error rate might be stated as 1 in 500,000, meaning there is 1 bit in error for every 500,000 bits transmitted. Normally, errors appear in bursts. In a burst error, more than 1 data bit is changed by the error-causing condition. In other words, errors are not uniformly distributed in time. Although an error rate might be stated as 1 in 500,000, errors are more likely to occur as 100 bits every 50,000,000 bits.
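The burst arithmetic above can be made concrete: a 1-in-500,000 rate over 50,000,000 bits yields 100 errored bits either way, but the number of separate error events differs sharply.

```python
# Same error rate, two very different error patterns.
BITS = 50_000_000
rate = 1 / 500_000

errored_bits = int(BITS * rate)       # 100 bits in error either way
uniform_events = errored_bits         # uniformly spread: 100 separate error events
burst_events = errored_bits // 100    # clustered: one 100-bit burst

print(errored_bits, uniform_events, burst_events)  # 100 100 1
```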
The fact that errors tend to be clustered in bursts rather than evenly dispersed is both good and bad. If the errors were not clustered, an error rate of 1 bit in 500,000 would make it rare for 2 erroneous bits to occur in the same character. Consequently, simple character-checking schemes would be effective at detecting errors. When errors are #ore or less evenly distrib#ted, it is not di#ficult to gras# the me#ning even when the error #ate is high, as it is in this #entence (1 charac#er in 20). But burst errors are the rule rather than the exception, often obliterating 100 or more bits at a time. This makes it more difficult to recover the meaning, so more reliance must be placed on error detection and correction methods. The positive side is that there are long periods of error-free transmission, meaning that very few messages encounter errors.

4.3.1 Sources of Errors

Line noise and distortion can cause data communication errors. The focus in this section is on electrical media such as twisted-pair wire and coaxial cable, because they are more likely to suffer from noise than are optical media such as fiber-optic cable. In this case, noise is undesirable electrical signals (for fiber-optic cable, it is undesirable light). Noise is introduced by equipment or natural disturbances, and it degrades the performance of a communication circuit. Noise manifests itself as extra bits, missing bits, or bits that have been "flipped" (i.e., changed from 1 to 0 or vice versa). Figure 4-2 summarizes the major sources of error and ways to prevent them. The first six sources listed there are the most important; the last three are more common in analog circuits than in digital circuits.

White noise or Gaussian noise (the familiar background hiss or static on radios and telephones) is caused by the thermal agitation of electrons and therefore is inescapable.
Even if the equipment were perfect and the wires were perfectly insulated from any and all external interference, there still would be some white noise. White noise usually is not a problem unless it becomes so strong that it obliterates the transmission. In this case, the strength of the electrical signal is increased so it overpowers the white noise; in technical terms, we increase the signal-to-noise ratio. Impulse noise (sometimes called spikes) is the primary source of errors in data communications. It is heard as a click or a crackling noise and can last as long as 1∕100 of a second. Such a click does not really affect voice communications, but it can obliterate a group of data, causing a burst error. At 1.5 Mbps, 15,000 bits would be changed by a spike of 1∕100 of a second. Some of the sources of impulse noise are voltage changes in adjacent lines, lightning flashes during thunderstorms, fluorescent lights, and poor connections in circuits. Cross-talk occurs when one circuit picks up signals in another. A person experiences cross-talk during telephone calls when she or he hears other conversations in the background. It occurs between pairs of wires that are carrying separate signals, in multiplexed links carrying many discrete signals, or in microwave links in which one antenna picks up a minute reflection from another antenna. Cross-talk between lines increases with increased communication distance, increased proximity of the two wires, increased signal strength, and higher-frequency signals. Wet or damp weather can also increase cross-talk. Like white noise, cross-talk has such a low signal strength that it normally is not bothersome. Echoes are the result of poor connections that cause the signal to reflect back to the transmitting equipment. If the strength of the echo is strong enough to be detected, it causes errors. Echoes, like cross-talk and white noise, have such a low signal strength that they normally are not bothersome. 
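The impulse-noise arithmetic in the paragraph above is simply data rate times spike duration:

```python
# Bits obliterated by a noise spike = data rate x spike duration.
def bits_affected(data_rate_bps: int, spike_seconds: float) -> int:
    return round(data_rate_bps * spike_seconds)

print(bits_affected(1_500_000, 1 / 100))  # 15000 bits at 1.5 Mbps, as in the text
```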
Echoes can also occur in fiber-optic cables when connections between cables are not properly aligned.

Attenuation is the loss of power a signal suffers as it travels from the transmitting computer to the receiving computer. Some power is absorbed by the medium or is lost before it reaches the receiver.

FIGURE 4-2 Sources of errors and ways to minimize them.

As the medium absorbs power, the signal becomes weaker, and the receiving equipment has less and less chance of correctly interpreting the data. This power loss is a function of the transmission method and circuit medium. High frequencies lose power more rapidly than do low frequencies during transmission, so the received signal can be distorted by unequal loss of its component frequencies. Attenuation increases as frequency increases or as the diameter of the wire decreases.

Intermodulation noise is a special type of cross-talk. The signals from two circuits combine to form a new signal that falls into a frequency band reserved for another signal. This type of noise is similar to harmonics in music. On a multiplexed line, many different signals are amplified together, and slight variations in the adjustment of the equipment can cause intermodulation noise. A maladjusted modem may transmit a strong frequency tone when not transmitting data, thus producing this type of noise.

In general, errors are more likely to occur in wireless, microwave, or satellite transmission than in transmission through cables. Therefore, error detection is more important when using radiated media than guided media. Impulse noise is the most frequent cause of errors in today's networks. Unfortunately, as the next section describes, it can be very difficult to determine what caused this type of error.

4.3.2 Error Prevention

Obviously, error prevention is very important.
There are many techniques to prevent errors (or at least reduce them), depending on the situation. Shielding (protecting wires by covering them with an insulating coating) is one of the best ways to prevent impulse noise, cross-talk, and intermodulation noise. Many different types of wires and cables are available with different amounts of shielding. In general, the greater the shielding, the more expensive the cable and the more difficult it is to install.

Moving cables away from sources of noise (especially power sources) can also reduce impulse noise, cross-talk, and intermodulation noise. For impulse noise, this means avoiding lights and heavy machinery. Locating communication cables away from power cables is always a good idea. For cross-talk, this means physically separating the cables from other communication cables.

Cross-talk and intermodulation noise are often caused by improper multiplexing. Changing multiplexing techniques (e.g., from FDM [Frequency Division Multiplexing] to TDM [Time Division Multiplexing]) or changing the frequencies or size of the guardbands in FDM can help. Many types of noise (e.g., echoes, white noise) can be caused by poorly maintained equipment or poor connections and splices among cables. This is particularly true for echo in fiber-optic cables, which is almost always caused by poor connections. The solution here is obvious: Tune the transmission equipment and redo the connections.

To avoid attenuation, telephone circuits have repeaters or amplifiers spaced throughout their length. The distance between them depends on the amount of power lost per unit length of the transmission line. An amplifier takes the incoming signal, increases its strength, and retransmits it on the next section of the circuit. Amplifiers are typically used on analog circuits such as the telephone company's voice circuits, and the distance between them depends on the amount of attenuation, although 1- to 10-mile intervals are common.
On analog circuits, it is important to recognize that the noise and distortion are also amplified, along with the signal. This means some noise from a previous circuit is regenerated and amplified each time the signal is amplified. Repeaters are commonly used on digital circuits. A repeater receives the incoming signal, translates it into a digital message, and retransmits the message. Because the message is recreated at each repeater, noise and distortion from the previous circuit are not amplified. This provides a much cleaner signal and results in a lower error rate for digital circuits.

MANAGEMENT FOCUS 4-1 Finding the Source of Impulse Noise

Several years ago, the University of Georgia radio station received FCC (Federal Communications Commission) approval to broadcast using a stronger signal. Immediately after the station started broadcasting with the new signal, the campus backbone network (BN) became unusable because of impulse noise. It took 2 days to link the impulse noise to the radio station, and when the radio station returned to its usual broadcast signal, the problem disappeared. However, this was only the first step in the problem. The radio station wanted to broadcast at full strength, and there was no good reason why the stronger broadcast should affect the BN in this way. After 2 weeks of effort, the problem was discovered. A short section of the BN ran above ground between two buildings. It turned out that the specific brand of outdoor cable we used was particularly tasty to squirrels. They had eaten the outer insulating coating off of the cable, making it act like an antenna to receive the radio signals. The cable was replaced with a steel-coated armored cable so the squirrels could not eat the insulation.
Things worked fine when the radio station returned to its stronger signal.

4.3.3 Error Detection

It is possible to develop data transmission methodologies that give very high error-detection performance. The only way to do error detection is to send extra data with each message. These error-detection data are added to each message by the data link layer of the sender on the basis of some mathematical calculations performed on the message (in some cases, error-detection methods are built into the hardware itself). The receiver performs the same mathematical calculations on the message it receives and matches its results against the error-detection data that were transmitted with the message. If the two match, the message is assumed to be correct. If they don't match, an error has occurred.

In general, the larger the amount of error-detection data sent, the greater the ability to detect an error. However, as the amount of error-detection data is increased, the throughput of useful data is reduced, because more of the available capacity is used to transmit these error-detection data and less is used to transmit the actual message itself. Therefore, the efficiency of data throughput varies inversely with the amount of error detection desired. Three well-known error-detection methods are parity checking, checksum, and cyclic redundancy checking.

Parity Checking One of the oldest and simplest error-detection methods is parity. With this technique, one additional bit is added to each byte in the message. The value of this additional parity bit is based on the number of 1s in each byte transmitted. This parity bit is set to make the total number of 1s in the byte (including the parity bit) either an even number or an odd number. Figure 4-3 gives an example. A little thought will convince you that parity will detect any single error (a switch of a 1 to a 0, or vice versa), but it cannot determine which bit was in error.
You will know an error occurred, but not what the error was. But if two bits are switched, the parity check will not detect any error. It is easy to see that parity can detect errors only when an odd number of bits have been switched; any even number of errors cancel one another out. Therefore, the probability of detecting an error, given that one has occurred, is only about 50%. Many networks today do not use parity because of its low error-detection rate. When parity is used, protocols are described as having odd parity or even parity.

FIGURE 4-3 Using parity for error detection.

Checksum With the checksum technique, a checksum (typically 1 byte) is added to the end of the message. The checksum is calculated by adding the decimal value of each character in the message, dividing the sum by 255, and using the remainder as the checksum. The receiver calculates its own checksum in the same way and compares it with the transmitted checksum. If the two values are equal, the message is presumed to contain no errors. Use of checksum detects close to 95% of the errors for multiple-bit burst errors.

Cyclic Redundancy Check One of the most popular error-checking schemes is cyclic redundancy check (CRC). It adds 8, 16, 24, or 32 bits to the message. With CRC, a message is treated as one long binary number, P. Before transmission, the data link layer (or hardware device) divides P by a fixed binary number, G, resulting in a whole number, Q, and a remainder, R∕G. So, P∕G = Q + R∕G. For example, if P = 58 and G = 8, then Q = 7 and R = 2. G is chosen so that the remainder, R, will be either 8 bits, 16 bits, 24 bits, or 32 bits.1 The remainder, R, is appended to the message as the error-checking characters before transmission. The receiving hardware divides the received message by the same G, which generates an R.
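The parity, checksum, and division examples described above can be sketched as follows (the checksum follows the text's sum-modulo-255 recipe; the division uses the text's ordinary-arithmetic illustration, not the polynomial division real CRC hardware performs):

```python
# Sketches of the detection methods described in this section.

def even_parity_bit(byte: int) -> int:
    """Parity bit that makes the total number of 1s (data plus parity) even."""
    return bin(byte).count("1") % 2

def checksum(message: bytes) -> int:
    """The text's recipe: sum of character values, remainder after dividing by 255."""
    return sum(message) % 255

# Parity catches any single-bit error but misses double-bit errors:
byte = 0b1010110
print(even_parity_bit(byte) != even_parity_bit(byte ^ 0b01))  # True: one flip detected
print(even_parity_bit(byte) == even_parity_bit(byte ^ 0b11))  # True: two flips missed

# The text's CRC-style division: P / G = Q remainder R; R rides along as the check value.
P, G = 58, 8
Q, R = divmod(P, G)
print(Q, R)  # 7 2
```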
The receiving hardware checks to ascertain whether the received R agrees with the locally generated R. If it does not, the message is assumed to be in error. Cyclic redundancy check performs quite well. The most commonly used CRC codes are CRC-16 (a 16-bit version), CRC-CCITT (another 16-bit version), and CRC-32 (a 32-bit version). The probability of detecting an error is 100% for all errors of the same length as the CRC or less. For example, CRC-16 is guaranteed to detect errors if 16 or fewer bits are affected. If the burst error is longer than the CRC, then CRC is not perfect but is close to it. CRC-16 will detect about 99.998% of all burst errors longer than 16 bits, whereas CRC-32 will detect about 99.99999998% of all burst errors longer than 32 bits.

1 CRC is actually more complicated than this because it uses polynomial division, not "normal" division as illustrated here. Ross Williams provides an excellent tutorial on CRC at www.ross.net/crc/crcpaper.html.

4.3.4 Error Correction via Retransmission

Once an error has been detected, it must be corrected. The simplest, most effective, least expensive, and most commonly used method for error correction is retransmission. With retransmission, a receiver that detects an error simply asks the sender to retransmit the message until it is received without error. This is often called Automatic Repeat reQuest (ARQ). There are two types of ARQ: stop-and-wait and continuous.

Stop-and-Wait ARQ With stop-and-wait ARQ, the sender stops and waits for a response from the receiver after each data packet. After receiving a packet, the receiver sends either an

FIGURE 4-4 Stop-and-wait ARQ (Automatic Repeat reQuest).
(ACK = acknowledgment; NAK = negative acknowledgment.) The figure shows the sender transmitting packet A, waiting for an ACK, transmitting packet B, receiving a NAK, and retransmitting packet B until it is acknowledged.

acknowledgment (ACK), if the packet was received without error, or a negative acknowledgment (NAK), if the message contained an error. If it is a NAK, the sender resends the previous message. If it is an ACK, the sender continues with the next message. Stop-and-wait ARQ is by definition a half-duplex transmission technique (Figure 4-4).

Continuous ARQ With continuous ARQ, the sender does not wait for an acknowledgment after sending a message; it immediately sends the next one. While the messages are being transmitted, the sender examines the stream of returning acknowledgments. If it receives a NAK, the sender retransmits the needed messages. The packets that are retransmitted may be only those containing an error (called Link Access Protocol for Modems [LAP-M]) or may be the first packet with an error and all those that followed it (called Go-Back-N ARQ). LAP-M is better because it is more efficient. Continuous ARQ is by definition a full-duplex transmission technique, because both the sender and the receiver are transmitting simultaneously. (The sender is sending messages, and the receiver is sending ACKs and NAKs.) Figure 4-5 illustrates the flow of messages on a communication circuit using continuous ARQ.

Continuous ARQ is sometimes called sliding window because of the visual imagery the early network designers used to think about continuous ARQ. Visualize the sender having a set of messages to send in memory stacked in order from first to last. Now imagine a window that moves through the stack from first to last. As a message is sent, the window expands to cover it, meaning that the sender is waiting for an ACK for the message. As an ACK is received for a message, the window moves forward, dropping the message out of the bottom of the window, indicating that it has been sent and received successfully.
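The sliding-window behavior just described can be sketched as a toy simulation. The window size, packet names, and the LAP-M-style retransmit-only-the-errored-packet policy are illustrative choices:

```python
# Toy continuous-ARQ simulation: at most WINDOW packets may be outstanding
# (sent but unacknowledged); a NAK retransmits only the errored packet.
WINDOW = 3

def send_with_arq(packets: list[str], error_on_first_try: set[int]) -> list[str]:
    delivered, outstanding, tries = [], [], {}
    i = 0
    while i < len(packets) or outstanding:
        while i < len(packets) and len(outstanding) < WINDOW:
            outstanding.append(i)          # window expands over newly sent packets
            i += 1
        seq = outstanding.pop(0)           # receiver handles the oldest outstanding packet
        tries[seq] = tries.get(seq, 0) + 1
        if seq in error_on_first_try and tries[seq] == 1:
            outstanding.insert(0, seq)     # NAK: retransmit just this packet (LAP-M style)
        else:
            delivered.append(packets[seq]) # ACK: the window slides forward
    return delivered

print(send_with_arq(["A", "B", "C", "D"], error_on_first_try={1}))  # ['A', 'B', 'C', 'D']
```

Even though packet B is errored on its first try, all four packets are eventually delivered in order, which is exactly the guarantee ARQ provides.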
Continuous ARQ is also important in providing flow control, which means ensuring that the computer sending the message is not transmitting too quickly for the receiver. For example, if a client computer were sending information too quickly for a server computer to store a file being uploaded, the server might run out of memory to store the file. By using ACKs and NAKs, the receiver can control the rate at which it receives information. With stop-and-wait ARQ, the receiver does not send an ACK until it is ready to receive more packets. In continuous ARQ, the sender and receiver usually agree on the size of the sliding window. Once the sender has transmitted the maximum number of packets permitted in the sliding window, it cannot send any more packets until the receiver sends an ACK.

FIGURE 4-5 Continuous ARQ (Automatic Repeat reQuest). ACK = acknowledgment; NAK = negative acknowledgment. The figure shows the sender transmitting packets A through D back to back while ACKs and NAKs return; a NAK for packet C triggers retransmission of packet C alone.

TECHNICAL FOCUS 4-1 How Forward Error Correction Works

To see how error-correcting codes work, consider the example of a forward error checking code in Figure 4-6, called a Hamming code, after its inventor, R. W. Hamming. This code is a very simple approach, capable of correcting 1-bit errors. More sophisticated techniques (e.g., Reed–Solomon) are commonly used today, but this will give you a sense of how they work. The Hamming code associates even parity bits with unique combinations of data bits.
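In the spirit of the Technical Focus box, a Hamming(7,4) code can be sketched as follows. The bit layout (parity bits in positions 1, 2, and 4) is the textbook-generic construction and may differ from the book's Figure 4-6:

```python
# Hamming(7,4): three even-parity bits, each covering a different combination of
# the four data bits, let the receiver locate and flip back any single errored bit.

def hamming_encode(d: list[int]) -> list[int]:
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d4) % 2               # covers data in positions 3, 5, 7
    p2 = (d1 + d3 + d4) % 2               # covers data in positions 3, 6, 7
    p3 = (d2 + d3 + d4) % 2               # covers data in positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming_correct(code: list[int]) -> list[int]:
    c = code[:]
    s1 = (c[0] + c[2] + c[4] + c[6]) % 2
    s2 = (c[1] + c[2] + c[5] + c[6]) % 2
    s3 = (c[3] + c[4] + c[5] + c[6]) % 2
    syndrome = s1 + 2 * s2 + 4 * s3       # position of the bad bit (0 = no error)
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the errored bit back
    return [c[2], c[4], c[5], c[6]]       # recovered data bits

code = hamming_encode([1, 0, 1, 1])
code[4] ^= 1                              # flip one bit in transit
print(hamming_correct(code))              # [1, 0, 1, 1] -- the error is corrected
```

Because the three parity checks together point at exactly one of the seven positions, any single-bit error can be corrected without retransmission, which is the essence of forward error correction.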
With a 4-data-bit code as an example, a character might be represented by the …

… hard disk(s)
Increase Circuit Capacity
• Upgrade to a faster circuit
• Increase the number of circuits
Reduce Network Demand
• Move files from the server to the client computers
• Increase the use of disk caching on client computers
• Change user behavior

Fitzergald c07.tex V2 - July 2, 2014 8:44 P.M. Page 210 (Chapter 7: Wired and Wireless Local Area Networks)

… settings can have a significant effect on performance. The specific settings differ by NOS but often include things such as the amount of memory used for disk caches, the number of simultaneously open files, and the amount of buffer space.

Hardware
One obvious solution if your network server is overloaded is to buy a second server (or more). Each server is then dedicated to supporting one set of application software (e.g., one handles email, another handles the financial database, and another stores customer records). The bottleneck can be broken by carefully identifying the demands each major application software package places on the server and allocating them to different servers.

Sometimes, however, most of the demand on the server is produced by one application that cannot be split across several servers. In this case, the server itself must be upgraded. The first place to start is with the server’s CPU. Faster CPUs mean better performance. If you are still using an old computer as a LAN server, this may be the answer; you probably need to upgrade to the latest and greatest. Clock speed also matters: the faster, the better. Most computers today also come with CPU cache (a very fast memory module directly connected to the CPU). Increasing the cache will increase CPU performance.

A second bottleneck is the amount of memory in the server. Increasing the amount of memory increases the probability that disk caching will work, thus increasing performance.
A third bottleneck is the number and speed of the hard disks in the server. The primary function of the LAN server is to process requests for information on its disks. Slow hard disks give slow network performance. The obvious solution is to buy the fastest disk drive possible. Even more important, however, is the number of hard disks. Each computer hard disk has only one read/write head, meaning that all requests must go through this one device. By using several smaller disks rather than one larger disk (e.g., five 200 gigabyte disks rather than one 1 terabyte disk), you now have more read/write heads, each of which can be used simultaneously, dramatically improving throughput. A special type of disk drive called RAID (redundant array of inexpensive disks) builds on this concept and is typically used in applications requiring very fast processing of large volumes of data, such as multimedia. Of course, RAID is more expensive than traditional disk drives, but costs have been shrinking. RAID can also provide fault tolerance, which is discussed in Chapter 11.

Several vendors sell special-purpose network servers that are optimized to provide extremely fast performance. Many of these provide RAID and use symmetric multiprocessing (SMP) that enables one server to use up to 16 CPUs. Such servers provide excellent performance but cost more (often $5,000 to $15,000).

7.6.2 Improving Circuit Capacity
Improving the capacity of a circuit means increasing the volume of simultaneous messages the circuit can transmit from network clients to the server(s). One obvious approach is simply to buy a bigger circuit. For example, if you are now using a 100Base-T LAN, upgrading to a 1000Base-T LAN will improve capacity. Or if you have 802.11n, then upgrade to 802.11ac. You can also add more circuits so that there are two or even three separate high-speed circuits between busy parts of the network, such as the core backbone and the data center.
Most Ethernet circuits can be configured to use full duplex (see Chapter 4), which is often done for backbones and servers.

Another approach is to segment the network. If there is more traffic on a LAN than it can handle, you can divide the LAN into several smaller segments. Breaking a network into smaller parts is called network segmentation. In a wired LAN, this means adding one or more new switches and spreading the computers across these new switches. In a wireless LAN, this means adding more access points that operate on different channels. If wireless performance is significantly worse than expected, then it is important to check for sources of interference near the AP and the computers, such as Bluetooth devices and cordless phones.

7.6.3 Reducing Network Demand
One way to reduce network demand is to move files to client computers. Heavily used software packages that continually access and load modules from the network can place unusually heavy demands on the network. Although user data and messages are often only a few kilobytes in size, today’s software packages can be many megabytes in size. Placing even one or two such applications on client computers can greatly improve network performance (although this can create other problems, such as increasing the difficulty in upgrading to new versions of the software).

Most organizations now provide both wired and wireless networks, so another way to reduce demand is to shift it from wired networks to wireless networks, or vice versa, depending on which has the problem. For example, you can encourage wired users to go wireless or install wired Ethernet jacks in places where wireless users often sit.

Because the demand on most LANs is uneven, network performance can be improved by attempting to move user demands from peak times to off-peak times.
For example, early morning and after lunch are often busy times when people check their email. Telling network users about the peak times and encouraging them to change their habits may help; however, in practice, it is often difficult to get users to change. Nonetheless, finding one application that places a large demand on the network and moving it can have a significant impact (e.g., printing several thousand customer records after midnight).

7.7 IMPLICATIONS FOR MANAGEMENT
As LANs have standardized on Ethernet, local area networking technology has become a commodity in most organizations. As with most commodities, the cost of LAN equipment (i.e., network interface cards, cabling, hubs, and switches) has dropped significantly. Some vendors are producing high-quality equipment, whereas some new entrants into the market are producing equipment that meets standards but creates opportunities for problems because it lacks the features of more established brands. It becomes difficult for LAN managers to explain to business managers why it’s important to purchase higher-quality, more expensive equipment when low-cost “standardized” equipment is available.

Most SOHO users are moving quickly to wireless, which means that wired Ethernet is a legacy technology for small SOHO devices; there is little profit to be made in this market, and many manufacturers will abandon it. We have seen a rise in the sales of wireless cards for desktop computers, and desktop computers targeted for sale to the SOHO market will come standard with wireless cards in addition to the wired Ethernet cards we see today.

Decreasing costs for LAN equipment also mean that network-enabled, microprocessor-controlled devices that have not normally been thought of as computer technology are becoming less expensive. Therefore, we have seen devices such as copiers turned into network printers and scanners. This trend will increase as electrical appliances such as refrigerators and ovens become network devices.
Don’t laugh; networked vending machines are already in use.

SUMMARY
LAN Components
The NIC enables the computer to be physically connected to the network and provides the physical layer connection among the computers. Wired LANs use UTP wires, STP wires, and/or fiber-optic cable. Network hubs and switches provide an easy way to connect network cables and act as repeaters. Wireless NICs provide radio connections to access points that link wireless computers into the wired network. The NOS is the software that performs the functions associated with the data link and the network layers and interacts with the application software and the computer’s own operating system. Every NOS provides two sets of software: one that runs on the network server(s) and one that runs on the network client(s). A network profile specifies what resources on each server are available for network use by other computers and which devices or people are allowed what access to the network.

Ethernet (IEEE 802.3)
Ethernet, the most commonly used LAN protocol in the world, uses a contention-based media access technique called CSMA/CD. There are many different types of Ethernet that use different network cabling (e.g., 10Base-T, 100Base-T, 1000Base-T, and 10 GbE). Switches are preferred to hubs because they are significantly faster.

Wireless Ethernet
Wireless Ethernet (often called Wi-Fi) is the most common type of wireless LAN. It uses a physical star/logical bus topology with both controlled and contention-based media access control. 802.11n, the newest version, provides 200 Mbps over three channels or faster speeds over fewer channels.

Best Practice LAN Design
Most organizations install 100Base-T or 10/100/1000 Ethernet as their primary LAN and also provide wireless LANs as an overlay network. For SOHO networks, the best LAN choice may be wireless.
Designing the data center and e-commerce edge often uses specialized equipment such as server farms, load balancers, virtual servers, SANs, and UPS.

Improving LAN Performance
Every LAN has a bottleneck, a narrow point in the network that limits the number of messages that can be processed. Generally speaking, the bottleneck will lie in either the network server or a network circuit. Server performance can be improved with a faster NOS that provides better disk caching, by buying more servers and spreading applications among them, or by upgrading the server’s CPU, memory, NIC, and the speed and number of its hard disks. Circuit capacity can be improved by using faster technologies (100Base-T rather than 10Base-T), by adding more circuits, and by segmenting the network into several separate LANs by adding more switches or access points. Overall LAN performance also can be improved by reducing the demand for the LAN by moving files off the LAN, moving users from wired Ethernet to wireless or vice versa, and by shifting users’ routines.

KEY TERMS
access point (AP), 188; Active Directory Service (ADS), 190; association, 196; beacon frame, 196; bottleneck, 208; bus topology, 191; cable plan, 202; cabling, 202; Carrier Sense Multiple Access with Collision Detection (CSMA/CD), 194; Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), 196; channel, 203; clear to send (CTS), 197; collision, 194; collision detection (CD), 194; collision domain, 192; cut-through switching, 193; directional antenna, 189; distributed coordination function (DCF), 196;
domain controller, 190; dual-band access point, 198; Ethernet, 191; fiber-optic cable, 186; forwarding table, 192; fragment-free switching, 194; frame, 191; hub, 187; IEEE 802.3, 191; IEEE 802.11, 196; latency, 193; layer-2 switch, 193; lightweight directory access protocol (LDAP), 190; load balancer, 205; logical topology, 191; MAC address filtering, 200; network-attached storage (NAS), 206; network interface card (NIC), 186; network operating system (NOS), 190; network profile, 191; network segmentation, 211; network server, 190; omnidirectional antenna, 189; overlay network, 201; physical carrier sense method, 196; physical topology, 191; point coordination function (PCF), 197; port, 187; power over Ethernet (POE), 188; probe frame, 196; redundant array of inexpensive disks (RAID), 210; request to send (RTS), 197; server virtualization, 205; shielded twisted-pair (STP), 186; site survey, 202; small-office, home-office (SOHO), 187; storage area network (SAN), 206; store and forward switching, 194; switching, 187; switch, 187; switched Ethernet, 201; symmetric multi-processing (SMP), 210; topology, 191; twisted-pair cable, 188; unshielded twisted-pair (UTP) cable, 186; virtual carrier sense, 197; warchalking, 199; wardriving, 199; Wi-Fi, 196; Wi-Fi Protected Access (WPA), 200; WiGig, 199; Wired Equivalent Privacy (WEP), 199; wireless LAN (WLAN), 187; 10Base-T, 195; 100Base-T, 195; 1000Base-T, 195; 10/100/1000 Ethernet, 195; 1 GbE, 195; 10 GbE, 195; 40 GbE, 195; 802.11ac, 198; 802.11ad, 199; 802.11a, 198; 802.11b, 198; 802.11g, 198; 802.11i, 200; 802.11n, 198

QUESTIONS
1. Define local area network.
2. Describe at least three types of servers.
3. Describe the basic components of a wired LAN.
4. Describe the basic components of a wireless LAN.
5. What types of cables are commonly used in wired LANs?
6. Compare and contrast category 5 UTP, category 5e UTP, and category 5 STP.
7. What is a cable plan and why would you want one?
8. What does a NOS do? What are the major software parts of a NOS?
9. How does wired Ethernet work?
10. How does a logical topology differ from a physical topology?
11. Briefly describe how CSMA/CD works.
12. Explain the terms 100Base-T, 100Base-F, 1000Base-T, 10 GbE, and 10/100/1000 Ethernet.
13. How do Ethernet switches know where to send the frames they receive? Describe how switches gather and use this knowledge.
14. Compare and contrast cut-through, store and forward, and fragment-free switching.
15. Compare and contrast the two types of antennas.
16. How does Wi-Fi perform media access control?
17. How does Wi-Fi differ from shared Ethernet in terms of topology, media access control, error control, and the Ethernet frame?
18. Explain how CSMA/CA DCF works.
19. Explain how CSMA/CA PCF works.
20. Explain how association works in WLAN.
21. What are the best practice recommendations for wired LAN design?
22. What are the best practice recommendations for WLAN design?
23. What is a site survey, and why is it important?
24. How do you decide how many APs are needed and where they should be placed for best performance?
25. How does the design of the data center differ from the design of the LANs intended to provide user access to the network?
26. What are three special purpose devices you might find in a data center and what do they do?
27. What is a bottleneck and how can you locate one?
28. Describe three ways to improve network performance on the server.
29. Describe three ways to improve network performance on circuits.
30. Many of the wired and wireless LANs share the same or similar components (e.g., error control). Why?
31. As WLANs become more powerful, what are the implications for networks of the future? Will wired LANs still be common or will we eliminate wired offices?

EXERCISES
A. Survey the LANs used in your organization. Are they wireless or wired? Why?
B. Document one LAN (or LAN segment) in detail.
What devices are attached, what cabling is used, and what is the topology? What does the cable plan look like?
C. You have been hired by a small company to install a simple LAN for its 18 Windows computers. Develop a simple LAN and determine the total cost; that is, select the cables, hubs/switches, and so on, and price them. Assume that the company has no network today and that the office is small enough that you don’t need to worry about cable length.

MINICASES
I. Designing a New Ethernet
One important issue in designing Ethernet lies in making sure that if a computer transmits a frame, any other computer that attempts to transmit at the same time will be able to hear the incoming frame before it stops transmitting, or else a collision might go unnoticed. For example, assume that we are on earth and send an Ethernet frame over a very long piece of category 5 wire to the moon. If a computer on the moon starts transmitting at the same time as we do on earth and finishes transmitting before our frame arrives at the moon, there will be a collision, but neither computer will detect it; the frame will be garbled, but no one will know why. So, in designing Ethernet, we must make sure that the length of cable in the LAN is shorter than the length of the shortest possible frame that can be sent. Otherwise, a collision could go undetected.
a. Let’s assume that the smallest possible message is 64 bytes (including the 33-byte overhead). If we use 100Base-T, how long (in meters) is a 64-byte message? While electricity in the cable travels a bit slower than the speed of light, once you include delays in the electrical equipment in transmitting and receiving the signal, the effective speed is only about 40 million meters per second. (Hint: First calculate the number of seconds it would take to transmit the frame, then calculate the number of meters the signal would travel in that time, and you have the total length of the frame.)
b.
If we use 10 GbE, how long (in meters) is a 64-byte frame?
c. The answer in part b is the maximum distance any single cable could run from a switch to a computer in an Ethernet LAN. How would you overcome the problem implied by this?

II. Pat’s Petunias
You have been called in as a network consultant by your cousin Pat, who operates a successful mail-order flower business. She is moving to a new office and wants to install a network for her telephone operators, who take phone calls and enter orders into the system. The number of operators working varies depending on the time of day and day of the week. On slow shifts, there are usually only 10 operators, whereas at peak times, there are 50. She has bids from different companies to install (1) Wi-Fi or (2) a switched Ethernet 100Base-T network. She wants you to give her some sense of the relative performance of the alternatives so she can compare that with their different costs. What would you recommend?

III. Eureka!
Eureka! is a telephone- and Internet-based concierge service that specializes in obtaining things that are hard to find (e.g., Super Bowl tickets, first-edition books from the 1500s, and Fabergé eggs). It currently employs staff members who work 24 hours per day (over three shifts), with usually 5–7 staff members working at any given time. Staff members answer the phone and respond to requests entered on the Eureka! Web site. Much of their work is spent on the phone and on computers searching on the Internet. They have just leased a new office and are about to wire it. They have bids from different companies to install (a) a 100Base-T network or (b) a Wi-Fi network. What would you recommend? Why?

IV. Tom’s Home Automation
Your cousin Tom runs a small construction company that builds custom houses. He has just started a new specialty service that he is offering to other builders on a subcontracting basis: home automation. He provides a complete service of installing cable in all the rooms in which the homeowner wants data access and installs the necessary networking devices to provide a LAN that will connect all the computers in the house to the Internet. Most homeowners choose to install a DSL or cable modem Internet connection that provides 12–25 Mbps from the house to the Internet. Tom has come to you for advice about whether he should continue to offer wiring services (which often cost $50 per room) or whether wireless is a better direction. What type of LAN would you recommend?

V. Sally’s Shoes
Sally Smith runs a shoe store in the mall that is about 30 feet by 50 feet in size, including a small office and a storage area in the rear. The store has one inventory computer in the storage area and one computer in the office. She is replacing the two cash registers with computers that will act as cash registers but will also be able to communicate with the inventory computer. Sally wants to network the computers with a LAN. What sort of LAN design would you recommend? Draw a picture.

VI. South West State University
South West State University installed a series of four Wi-Fi omnidirectional APs spread across the ceiling of the main floor of its library. The main floor has several large, open areas plus two dozen or so small offices spread around the outside walls. The WLAN worked well for one semester, but now more students are using the network, and performance has deteriorated significantly. What would you recommend that they do? Be sure to support your recommendations.

VII. Household Wireless
Your sister is building a new two-story house (which measures 50 feet long by 30 feet wide) and wants to make sure that it is capable of networking her family’s three computers together. She and her husband are both consultants and work out of their home in the evenings and a few days a month (each has a separate office with a computer, plus a laptop from the office that are occasionally used). The kids also have a computer in their playroom. They have several options for networking their home:
a. Wire the two offices and playroom with Ethernet Cat 5e cable and put in a 1000Base-T switch for $40
b. Install one Wi-Fi access point ($85) and put Wi-Fi cards in the three computers for $50 each (their laptops already have Wi-Fi)
c. Any combination of these options
What would you recommend? Justify your recommendation.

VIII. Ubiquitous Offices
Ubiquitous Offices provides temporary office space in cities around the country. They have a standard office layout that is a single floor with outside dimensions of 150 feet wide by 150 feet long. The interior is drywall offices. They have 1000Base-T but want to add wireless access as well. How many access points would you buy, and where would you put them? Draw the office and show where the access points would go.

IX. ABC Warehouse
ABC Warehouse is a single-floor facility with outside dimensions of 100 feet wide by 350 feet long. The interior is open, but there are large metal shelving units throughout the building to hold all the goods in the warehouse. How many access points would you buy, and where would you put them? Draw the warehouse and show where the access points would go.

X. Metro Motel
Metro Motel is a four-story motel on the outskirts of town. The outside dimensions of the motel are 60 feet wide by 200 feet long, and each story is about 10 feet high. Each floor (except the ground floor) has 20 rooms (drywall construction). There is a central corridor with rooms on both sides. How many access points would you buy, and where would you put them? Draw the motel and show where the access points would go.

CASE STUDY
NEXT-DAY AIR SERVICE
See the companion Web site at www.wiley.com/college/fitzgerald
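As a check on Minicase I, part (a), the hint’s two-step arithmetic can be written out directly (the constant names below are my own):

```python
# Worked sketch of Minicase I, part (a): the physical length of a 64-byte
# frame at 100Base-T, using the hint's effective signal speed of 40 million
# meters per second. The numbers come from the minicase itself.

FRAME_BITS = 64 * 8       # 64-byte minimum frame = 512 bits
BIT_RATE = 100e6          # 100Base-T clocks out 100 million bits per second
SIGNAL_SPEED = 40e6       # effective propagation speed, meters per second

transmit_time = FRAME_BITS / BIT_RATE          # seconds the frame occupies
frame_length = transmit_time * SIGNAL_SPEED    # meters of cable it spans
print(round(frame_length, 1))                  # 204.8 meters
```

For part (b), swapping in a 10 GbE bit rate of 10e9 shrinks the frame to about 2 meters, which is why part (c) asks how to overcome the implied distance limit.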
HANDS-ON ACTIVITY 7A
Windows Peer-to-Peer Networking
In this chapter, we’ve discussed two types of LANs: peer-to-peer LANs and dedicated server LANs. This activity will show you how to set up a peer-to-peer LAN for your house or apartment. We first describe file sharing and then discuss printer sharing.

Windows File Sharing
Windows file sharing enables you to select folders on your computer that you can permit other users on your LAN to read and write. There are three steps to creating a shared folder.

Step 1. Give your computer an Application Layer Name within a Workgroup
1. Go to Settings → Control Panel → System
2. Click on the Computer Name tab
3. Click Change
4. Type in a New Computer Name and Workgroup Name. All computers must have the same workgroup name to share files. Each computer within a workgroup must have a unique name.

Step 2. Enable File Sharing
1. Go to Settings → Control Panel → Windows Firewall
2. Click on the Exceptions tab
3. Make sure the box in front of File and Printer Sharing is checked
4. Go to Settings → Control Panel → Network Connections
5. Right click on the LAN connection and click Properties
6. Ensure that the box in front of File and Printer Sharing for Microsoft Networks is checked.

Step 3. Create the Shared Folder
1. Open Windows Explorer
2. Create a new folder
3. Right click the folder name and choose Properties
4. Click on the Sharing tab
5. Avoid the Network Wizard and make sure the boxes in front of Share This Folder and Allow Network Users to Change are checked

Once you have created a shared folder, other computers in your workgroup can access it. Move to another computer on your LAN and repeat steps 1 and 2 (and step 3 if you like). Now you can use the shared folder:
1. Double click on My Network Places
2. Double click on a shared folder
3. Create a file (e.g., using Word) and save it in your shared directory
4. Move the file(s) across computers in your workgroup

If you do this on your home network, anyone with access to your network can access the files in your shared folder. It is much safer to turn off file sharing unless you intentionally want to use it (see Step 2 and make sure the boxes are not checked if you want to prevent file sharing).

Windows Printer Sharing
In the same way you can share folders with other computers in your workgroup, you can share printers. To share a printer, do the following on the computer that has the printer connected to it:
1. Go to Settings → Control Panel → Printers and Faxes
2. Right click on a printer and select Properties
3. Click on the Sharing tab
4. Click on Share This Printer

Once you have done this, you can move to other computers on your LAN and install the network printer on them:
1. Go to Settings → Control Panel → Printers and Faxes
2. Click on Add a Printer
3. In the Welcome to Add a Printer Wizard, click Next
4. Click the Radio Button in front of A Network Printer and click Next
5. Click the Radio Button in front of Browse for a Printer and click Next
6. Select the Network Printer and click Next
7. You can make this printer your default printer or not, and click Next

Deliverables
1. Do a print screen of Windows Explorer to show the folders on another computer you can access.
2. Do a print screen to show you can print to the networked printer.

HANDS-ON ACTIVITY 7B
Tracing Ethernet
TracePlus Ethernet is a network monitoring tool that enables you to see how much network capacity you are using. If you’re working from home with a broadband Internet connection, you’ll be surprised how little of the Ethernet capacity you’re actually using. Your LAN connection is probably 1000 Mbps (or 300 Mbps if you’re using wireless), while the broadband connection into your home or apartment is only 20–30 Mbps.
The bottleneck is the broadband connection, so you use only a small percentage of your LAN capacity.

1. Download and install TracePlus. A free trial version of TracePlus is available at TUCows (www.tucows.com/preview/230332/TracePlus-Ethernet?q=Traceplus+). The URL might move, so if this link doesn’t work, search on the Internet. Just be careful what you download and where you get it. We like TuCows and Cnet as safe download sites, but you can also check Norton SafeWeb for their ratings of sites (safeweb.norton.com).

2. Start TracePlus and monitor your network. Leave it open in one part of your screen as you surf the Internet, check email, or watch a video. Figure 7-16 shows a sample TracePlus screen while I was surfing the Internet and checking email with Microsoft Outlook.

FIGURE 7-16 TracePlus

The dashboard at the bottom of the screen shows the real-time usage. You can see that when I took this screen shot, my computer was sending and receiving about 100 packets per second (or if you prefer, 100 frames per second), for a total of just under 1 Mbps of data. This is less than 1% of the total Ethernet bandwidth (i.e., network capacity), because I have switched to 100Base-T on my computer. The dashboard also shows that I’m sending and receiving almost no broadcast or multicast data.

Immediately above the dashboard is the summary for my computer (192.168.1.104 (Alan 2)). In the 2 minutes and 30 seconds of monitoring, my computer received 1,875 inbound packets with a total of 2.236 megabytes of data for a utilization of 0.118%. The average bits per second was about 118 Kbps. During the same time, my computer sent slightly fewer outbound packets (1,232), but the average packet was about 10 times smaller because the total amount of data sent was only 218,569 bytes.
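The dashboard figures can be approximately reproduced from the raw numbers shown on the screen (2.236 MB received over 2 minutes 30 seconds on a 100 Mbps link); the small gap versus the screen’s 118 Kbps and 0.118%, which TracePlus computed from exact byte counts, comes from rounding in the 2.236 MB figure:

```python
# Sketch of the arithmetic behind the TracePlus summary line.

bytes_received = 2.236e6          # inbound data from the screen shot
interval_s = 2 * 60 + 30          # 150 seconds of monitoring
link_bps = 100e6                  # 100Base-T capacity

avg_bps = bytes_received * 8 / interval_s     # average bits per second
utilization_pct = avg_bps / link_bps * 100    # share of link capacity used

print(round(avg_bps / 1000), round(utilization_pct, 3))   # 119 0.119
```

The same arithmetic answers Deliverables 2 and 3: total bits moved divided by the monitoring interval gives the data rate, and dividing that by the link speed gives utilization.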
Most packets were 128–511 bytes in length, but some were smaller and some were larger.

The Nodes tab in the upper right of the screen shows the nodes on my network that TracePlus can monitor. These include my computer (Alan2), a second computer (Orcelia), my router (192.168.1.1), a wireless access point (Aironet) with two connections (into the LAN and out to the wireless LAN), and the Indiana University VPN server (because I had my VPN connected; Chapter 11 discusses VPNs). You can see that all of these devices have little utilization (under 1%), as well as the total number of packets these devices have sent and received. You can click through the other tabs in this area to see the packet distribution. The panel on the left of the screen shows additional information about the types of packets, errors, and packet sizes.

Deliverables
1. How many packets can your computer send and receive?
2. What is the total data rate on your network?
3. What is your network utilization?

HANDS-ON ACTIVITY 7C
Wardriving and Warwalking
Wireless LANs are often not secure. It is simple to bring your laptop computer into a public area and listen for wireless networks. This is called wardriving (if you are in a car) or warwalking (if you’re walking). As long as you do not attempt to use any networks without authorization, wardriving and warwalking are quite legal.

There are many good software tools available for wardriving. My favorites are Net Surveyor (available from http://nutsaboutnets.com/netsurveyor-wifi-scanner/) or Wireless NetView (available from http://download.cnet.com/WirelessNetView/3000-2162_4-191039.html). Both are simple to use, yet powerful. The first step is to download and install the software on a laptop computer that has wireless capability. Just be careful what you download as these sites sometimes have other software on the same page. Once you have installed the software, simply walk or drive to a public area and start it up.
Figure 7-17 shows an example of the 13 networks I discovered in my home town of Bloomington, Indiana, when I parked my car in a neighborhood near the university that has a lot of rental houses and turned on Wireless NetView. I rearranged the order of the columns in NetView, so your screen might look a little different than mine when you first start up NetView.

NetView displays information about each wireless LAN it discovers. The first column shows the name of the WLAN (the SSID). The second column shows the last signal strength it detected, whereas the third column shows the average signal strength. I used NetView from my parked car on the street, so the signal strengths are not strong, and because I wasn’t moving, the average signal strength and the last signal strength are the same.

You can examine the “PHY Types” column and see that most APs are 802.11n, although there are three older 802.11g APs. Values in the “Maximum Speed” column are quite variable. There are some newer 802.11n APs that are running at the top speed of 450 Mbps. Some 802.11n APs provide 144 Mbps, which suggests that these WLANs are likely to be older APs that are not capable of the higher speeds of newer 802.11n APs. You can also see that there are three 802.11g APs that provide only 54 Mbps.

The “Channel” column shows a fairly even distribution of channels 1, 6, and 11, indicating that most users have configured them to use the three standard channels. However, the owner of the FatJesse WLAN has configured it to run on channel 2.

All the APs in this neighborhood were secure; they had implemented encryption. However, the very first AP (2WIRE935) was using WEP, which is a very old standard. It’s better than nothing, but its owner should switch to WPA or WPA2.
FIGURE 7-17 WLANs in a neighborhood in Bloomington, Indiana
FIGURE 7-18 WLANs at Indiana University

Figure 7-18 shows a similar screen capture in the Kelley School of Business at Indiana University. If you look closely, you'll see that this shows only a small subset of the APs that were visible to NetView; there were more than 50 APs in total. In this case, you'll see a more standard configuration, with virtually all the APs being 802.11n running at 216 Mbps in channels 1, 6, and 12 (although you can't see the ones in channel 12). All the APs on IU Secure or eduroam are secured, whereas attwifi and IU Guest are not. You can also see two rogue APs (both have names starting with "PD") that are 802.11g, WEP-secured, running at 54 Mbps.

Deliverables
1. Capture a snapshot of the screen showing all the information about the various network connections that you collected during your warwalking.
2. What different versions of 802.11 did you see, what were their maximum speeds, and what channels were used?
3. How many networks were secure?
4. What is your overall assessment of the WLAN usage with respect to security?

HANDS-ON ACTIVITY 7D Apollo Residence Access LAN Design

Apollo is a luxury residence hall that will serve honor students at your university. The residence will be eight floors, with a total of 162 two-bedroom, one-bathroom apartments. The building is steel-frame construction with concrete on the outside and drywall on the inside, and measures 240 feet by 150 feet. The first floor has an open lobby with a seating area and a separate office area, whereas the second floor has meeting rooms. Floors 3–8 each contain apartments and a large open lounge with a seating area (see Figure 7-19).
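A first rough sizing question for the wireless side of this design is how many access points each floor needs. The sketch below is purely illustrative: the 50-foot indoor coverage radius is an assumption (real coverage depends on the AP model, construction materials, and a site survey), but the area arithmetic shows the approach.

```python
import math

# Rough AP-count estimate for one Apollo floor.
# COVERAGE_RADIUS_FT is an ASSUMED value for illustration only;
# an actual design would come from a site survey.
FLOOR_LENGTH_FT = 240
FLOOR_WIDTH_FT = 150
COVERAGE_RADIUS_FT = 50  # assumption

floor_area = FLOOR_LENGTH_FT * FLOOR_WIDTH_FT   # 36,000 sq ft
ap_area = math.pi * COVERAGE_RADIUS_FT ** 2     # ~7,854 sq ft per AP
aps_per_floor = math.ceil(floor_area / ap_area)

print(f"{aps_per_floor} APs per floor, {aps_per_floor * 6} for floors 3-8")
```

Under these assumptions the estimate is five APs per floor; a larger assumed radius or a survey showing good propagation through the drywall interior would reduce that number.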
Visio files for the residence are available on this book's Web site. Your team was hired to design a network for this residence hall. To improve its quality of service, the university has decided to install wired network connections in each apartment so that every room can have an IP phone as well as network access. For security reasons, the university wants two separate networks: a LAN that will provide secure wired and wireless access to all registered students and a public wireless LAN that will provide Internet access to visitors. This activity focuses only on the design of the LAN that will be provided on each of the six floors with apartments (floors 3–8). Do not consider floors 1 and 2 at this point; we will add those in the Hands-On Activity at the end of the next chapter.

FIGURE 7-19 Plans for Floors 3–8 of Apollo Residence (apartments A101–A127, wiring closet, custodian room, lobby, north and south elevators, and stairs; room sizes are not to exact scale)

FIGURE 7-20 LAN equipment price list

Ethernet Hubs and Switches                    Price (each)
Ethernet 100Base-T 8-port switch              $30
Ethernet 100Base-T 16-port switch             $70
Ethernet 100Base-T 24-port switch             $80
Ethernet 100Base-T 48-port switch             $130
Ethernet 10/100/1000Base-T 8-port switch      $70
Ethernet 10/100/1000Base-T 16-port switch     $130
Ethernet 10/100/1000Base-T 24-port switch     $200
Ethernet 10/100/1000Base-T 48-port switch     $300
Upgrade any switch to include POE             $75

Cable (including installation)                Price (per drop)
UTP Cat 5e (1000Base-T or slower)             $50
UTP Cat 6 (1000Base-T or slower)              $60
STP Cat 5e (1000Base-T or slower)             $60

Wireless Access Points                        Price (each)
802.11 wireless access point                  $60
802.11 wireless access point with POE         $120
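Totaling a design against the Figure 7-20 price list is simple multiplication. The sketch below shows one way to organize that calculation; the bill-of-materials quantities are hypothetical placeholders, not a recommended design (your own design will have different items and counts).

```python
# Sketch for costing a design against the Figure 7-20 price list.
# PRICES holds a subset of the list; the quantities in `bom` are
# ILLUSTRATIVE ONLY, not a recommended Apollo design.
PRICES = {
    "1000Base-T 48-port switch": 300,
    "POE upgrade": 75,
    "Cat 5e drop": 50,
    "802.11 AP with POE": 120,
}

def total_cost(bom: dict) -> int:
    """Sum quantity x unit price over a bill of materials."""
    return sum(PRICES[item] * qty for item, qty in bom.items())

# Hypothetical per-floor quantities, multiplied by six apartment floors:
bom = {
    "1000Base-T 48-port switch": 2 * 6,
    "POE upgrade": 2 * 6,
    "Cat 5e drop": 54 * 6,   # e.g., two drops per apartment, 27 apartments
    "802.11 AP with POE": 5 * 6,
}
print(f"Total: ${total_cost(bom):,}")
```

Keeping the price list in a table like this makes it easy to compare design alternatives (e.g., Cat 5e versus Cat 6 drops) for Deliverable 2.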
We have not yet discussed how to design a building backbone or campus backbone, so just assume that the backbone will connect into a LAN switch over one 100Base-T or 1000Base-T circuit.

Deliverables
1. Design the network for this residence hall and draw where the network equipment would be placed (use the floor plans provided).
2. Specify the products in your design and provide their cost and the total cost of the network. There are two options for specifying products. Option 1 is to use the generic LAN equipment list in Figure 7-20. Option 2 is to use CDW (www.cdw.com) to find LAN equipment. If you use CDW, you must use only Cisco devices (to ensure quality).

CHAPTER 8 BACKBONE NETWORKS

This chapter examines backbone networks (BNs), which are used in the distribution layer (within-building backbones) and the core layer (campus backbones). We discuss the three primary backbone architectures and the recommended best practice design guidelines on when to use them. The chapter ends with a discussion of how to improve BN performance and of the future of BNs.

OBJECTIVES
◾ Understand the internetworking devices used in BNs
◾ Understand the switched backbone architecture
◾ Understand the routed backbone architecture
◾ Understand Virtual LAN architecture
◾ Understand the best practice recommendations for backbone design
◾ Be aware of ways to improve BN performance

OUTLINE
8.1 Introduction
8.2 Switched Backbones
8.3 Routed Backbones
8.4 Virtual LANs
8.5 The Best Practice Backbone Design
8.6 Improving Backbone Performance
8.6.1 Improving Device Performance
8.6.2 Improving Circuit Capacity
8.6.3 Reducing Network Demand
8.7 Implications for Management
Summary

8.1 INTRODUCTION

Chapter 6 outlined the seven major components in a network (see Figure 6.1). Chapter 7, on LANs, described how to design the LANs that provide user access to the network as well as the LANs in the data center and e-commerce edge.
This chapter focuses on the next two major network architecture components: the backbone networks that connect the access LANs within a building (called the distribution layer) and the backbone networks that connect the different buildings on one enterprise campus (called the core layer). Backbones used to be built with special technologies, but today most BNs use high-speed Ethernet. There are two basic components to a BN: the network cable and the hardware devices that connect other networks to the BN. The cable is essentially the same as that used in LANs, except that it is often fiber optic to provide higher data rates. Fiber optic is also used when the buildings on an enterprise campus are farther apart than the 100 meters that standard twisted-pair cable can reach. The hardware devices can be computers or special-purpose devices that just transfer messages from one network to another. These include switches, routers, and VLAN switches.

Switches operate at the data link layer. These are the same layer-2 switches discussed in Chapter 7: they use the data link layer address to forward packets between network segments, learning addresses by reading the source addresses of the frames they receive.

Routers operate at the network layer. They connect two different TCP/IP subnets. Routers are the "TCP/IP gateways" that we first introduced in Chapter 5. Routers strip off the data link layer packet, process the network layer packet, and forward only those messages that need to go to other networks on the basis of their network layer address. Routers may be special-purpose devices or modules in other devices (e.g., wireless access points for home use often include a built-in router). In general, they perform more processing on each message than switches and therefore operate more slowly.
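The learn-then-forward behavior of a layer-2 switch can be sketched in a few lines: the switch records which port each source address arrived on, forwards frames for known destinations out a single port, and floods frames for unknown destinations out every other port. This is a simplified illustration of the behavior described above, not any vendor's implementation.

```python
# Minimal sketch of layer-2 switch forwarding: learn source addresses,
# forward by destination address, flood when the destination is unknown.
class Layer2Switch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.forwarding_table = {}  # MAC address -> port number

    def receive(self, src_mac: str, dst_mac: str, in_port: int) -> list:
        # Learn: remember which port the source address lives on.
        self.forwarding_table[src_mac] = in_port
        # Forward: a known destination goes out one port; unknown floods.
        if dst_mac in self.forwarding_table:
            return [self.forwarding_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

switch = Layer2Switch(num_ports=4)
print(switch.receive("aa:aa", "bb:bb", in_port=0))  # unknown: flood to 1, 2, 3
print(switch.receive("bb:bb", "aa:aa", in_port=2))  # aa:aa learned: port 0 only
```

Note that a router doing the same job would first strip the Ethernet frame and examine the IP packet, which is the extra processing that makes routers slower than switches.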
VLAN switches are a special combination of layer-2 switches and routers. They are complex devices intended for use in large networks that have special requirements. We discuss these in Section 8.4. In the sections that follow, we describe the three basic BN architectures and discuss at which layer they are often used. We assume that you are comfortable with the material on TCP/IP in Chapter 5; if you are not, you may want to go back and review Section 5.6 of the chapter, entitled “TCP/IP Example,” before you continue reading. We then explain the best practice design guidelines for the distribution layer and the core layer and discuss how to improve performance.8.2 SWITCHED BACKBONES Switched backbones are probably the most common type of BN used in the distribution layer (i.e., within a building); most new building BNs designed today use switched backbones. Switched backbone networks use a star topology with one switch at its center. Figure 8-1 shows a switched backbone connecting a series of LANs. There is a switch serving each LAN (access layer) that is connected to the backbone switch at the bottom of the figure (distribution layer). Most organizations now use switched backbones in which all network devices for one part of the building are physically located in the same room, often in a rack of equipment. This has the advantage of placing all network equipment in one place for easy maintenance and upgrade, but it does require more cable. In most cases, the cost of the cable is only a small part of the overall cost to install the network, so the cost is greatly outweighed by the simplicity of maintenance and the flexibility it provides for future upgrades. The room containing the rack of equipment is sometimes called the main distribution facility (MDF) or central distribution facility (CDF). Figure 8-2 shows a photo of an MDF room at Indiana University. Figure 8-3 shows the equipment diagram of this same room. 
The cables from all computers and devices in the area served by the MDF (often hundreds of cables) are run into the MDF room. Once in the room, they are connected into the various devices. The devices in the rack are connected among themselves using very short cables called patch cables. With rack-mounted equipment, it becomes simple to move computers from one LAN to another. Usually, all the computers in the same general physical location are connected to the same switch and thus share the capacity of the switch. Although this often works well, it can cause problems if many of the computers on the switch are high-traffic computers. For example, if all the busy computers on the network are located in the upper left area of the figure, the switch in this area may become a bottleneck. With an MDF, all cables run into one room. If one switch becomes overloaded, it is straightforward to unplug the cables of several high-demand computers from the overloaded switch and plug them into one or more less-busy switches. This spreads the traffic around the network more efficiently and means that network capacity is no longer tied to the physical location of the computers; computers in the same physical area can be connected into different network segments.

FIGURE 8-1 Rack-mounted switched backbone network architecture

Sometimes a chassis switch is used instead of a rack. A chassis switch enables users to plug modules directly into the switch. Each module is a certain type of network device. One module might be a 16-port 100Base-T switch, another might be a router, and another might be a 4-port 1000Base-F switch, and so on. The switch is designed to hold a certain number of modules and has a certain internal capacity, so that all the modules can be active at one time.
For example, a switch with four 1000Base-T switch modules (24 ports each) and one 1000Base-F port would need an internal switching capacity of at least 97 Gbps ([4 × 24 × 1 Gbps] + [1 × 1 Gbps]). The key advantage of chassis switches is their flexibility. It becomes simple to add new modules with additional ports as the LAN grows and to upgrade the switch to use new technologies. For example, if you want to add gigabit Ethernet, you simply lay the cable and insert the appropriate module into the chassis switch.

FIGURE 8-2 An MDF with rack-mounted equipment. A layer-2 chassis switch with five 100Base-T modules (center of photo) connects to four 24-port 100Base-T switches. The chassis switch is connected to the campus backbone using 1000Base-F over fiber-optic cable. The cables from each room are wired into the rear of the patch panel (shown at the top of the photo), with the ports on the front of the patch panel labeled to show which room is which. Patch cables connect the patch panel ports to the ports on the switches. Source: Photo courtesy of the author, Alan Dennis

FIGURE 8-3 MDF network diagram (a layer-2 chassis switch with one serial port, five 100Base-T ports, two empty slots, and one 1000Base-F port to the building backbone, connected to four 24-port 100Base-T LAN switches)

MANAGEMENT FOCUS 8-1 Switched Backbones at Indiana University

At Indiana University we commonly use switched backbones in our buildings. Figure 8-4 shows a typical design. Each floor in the building has a set of switches and access points that serve the LANs on that floor.
Each of these LANs and WLANs is connected into a switch for that floor, thus forming a switched backbone on each floor. Typically, we use switched 100Base-T within each floor. The switch forming the switched backbone on each floor is then connected into another switch in the basement, which provides a switched backbone for the entire building. The building backbone is usually a higher-speed network running over fiber-optic cable (e.g., 100Base-F or 1 GbE). This switch, in turn, is connected into a high-speed router that leads to the campus backbone (a routed backbone design).

FIGURE 8-4 Switched backbones at Indiana University (each floor's LAN switches and wireless AP connect to a floor switch; the floor switches connect to a basement switch, which connects to a router leading to the campus backbone)

8.3 ROUTED BACKBONES

Routed backbones move packets along the backbone on the basis of their network layer address (i.e., layer-3 address). Routed backbones are sometimes called subnetted backbones or hierarchical backbones and are most commonly used to connect different buildings on the same enterprise campus backbone network (i.e., at the core layer). Figure 8-5 illustrates a routed backbone used at the core layer. A routed backbone is the basic backbone architecture we used to illustrate how TCP/IP worked in Chapter 5. There is a series of LANs (access layer) connected to switched backbones (distribution layer). Each backbone switch is connected to a router. Each router is connected to a core router (core layer). These routers break the network into separate subnets. The LANs in one building are a separate subnet from the LANs in a different building.
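This per-building subnet separation can be sketched with Python's ipaddress module. The addresses below are hypothetical examples: a broadcast (such as an ARP request) originating in Building A's subnet is delivered only to hosts whose addresses fall inside that subnet, because the router does not forward it.

```python
import ipaddress

# Hypothetical subnets: one per building, as in a routed backbone.
building_a = ipaddress.ip_network("10.1.1.0/24")
building_b = ipaddress.ip_network("10.1.2.0/24")

hosts = ["10.1.1.20", "10.1.1.21", "10.1.2.30"]
sender = "10.1.1.20"

# A broadcast from the sender reaches only other hosts in Building A's
# subnet; the router keeps it out of Building B.
receivers = [h for h in hosts
             if ipaddress.ip_address(h) in building_a and h != sender]
print(receivers)  # ['10.1.1.21']
```

The host at 10.1.2.30 never sees the broadcast, which is exactly the containment that makes routed backbones more efficient than one large switched network.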
Message traffic stays within each subnet unless it specifically needs to leave the subnet to travel elsewhere on the network, in which case the network layer address (e.g., TCP/IP) is used to move the packet. For example, in a switched backbone, a broadcast message (such as an ARP) would be sent to every single computer in the network. A routed backbone ensures that broadcast messages stay in the one network segment (i.e., subnet) where they belong and are not sent to all computers. This leads to a more efficient network. Each set of LANs is usually a separate entity, relatively isolated from the rest of the network. There is no requirement that all LANs share the same technologies. Each set of LANs can contain its own server designed to support the users on that LAN, but users can still easily access servers on other LANs over the backbone, as needed.

FIGURE 8-5 Routed backbone architecture (three sets of computers and access layer switches, each connected through a distribution layer backbone switch to a router; the routers connect to a core router, which also connects to the Internet)

A Day in the Life: Network Operations Manager

The job of the network operations manager is to ensure that the network operates effectively. The operations manager typically has several network administrators and network managers reporting to him or her and is responsible for both day-to-day operations and long-term planning for the network. The challenge is to balance daily firefighting with longer-term planning; operations managers are always looking for a better way to do things. They also meet with users to ensure that user needs are met. While network technicians deal primarily with networking technology, a network operations manager deals extensively with both technology and the users.
A typical day starts with administrative work that includes checks on all servers and backup processes to ensure that they are working properly and that there are no security issues. Then it’s on to planning. One typical planning item includes planning for the acquisition of new desktop or laptop computers, including meeting with vendors to discuss pricing, testing new hardware and software, and validating new standard configurations for computers. Other planning is done around network upgrades, such as tracking historical data to monitor network usage, projecting future user needs, surveying user requirements, testing new hardware and software, and actually planning the implementation of new network resources. One recent example of long-term planning was the migration from a Novell file server to Microsoft ADS file services. The first step was problem definition; what were the goals and the alternatives? The key driving force behind the decision to migrate was to make it simpler for the users (e.g., now the users do not need to have different accounts with different passwords) and to make it simpler for the network staff to provide technical support (e.g., now there is one less type of network software to support). The next step was to determine the migration strategy: a Big Bang (i.e., the entire network at once) or a phased implementation (several groups of users at a time). The migration required a technician to access each individual user’s computer, so it was impossible to do a Big Bang. The next step was to design a migration procedure and schedule whereby groups of users could be moved at a time (e.g., department by department). A detailed set of procedures and a checklist for network technicians were developed and extensively tested. Then each department was migrated on a 1-week schedule. One key issue was revising the procedures and checklist to account for unexpected occurrences during the migration to ensure that no data were lost. 
Another key issue was managing user relationships and dealing with user resistance. Source: With thanks to Mark Ross.

The primary advantage of the routed backbone is that it clearly segments each part of the network connected to the backbone. Each segment (usually a set of LANs or a switched backbone) has its own subnet addresses that can be managed by a different network manager. Broadcast messages stay within each subnet and do not move to other parts of the network. There are two primary disadvantages to routed backbones. First, the routers in the network impose time delays. Routing takes more time than switching, so routed networks can sometimes be slower. Second, routers are more expensive and require more management than switches. Figure 8-5 shows one core router; many organizations actually use two core routers to provide better security, as we discuss in Chapter 11.

8.4 VIRTUAL LANs

For many years, the design of LANs remained relatively constant. However, in recent years, the introduction of high-speed switches has begun to change the way we think about LANs. Switches offer the opportunity to design radically new types of LANs. Most large organizations today have implemented the virtual LAN (VLAN), a type of LAN-BN architecture made possible by intelligent, high-speed switches. Virtual LANs are networks in which computers are assigned to LAN segments by software rather than by hardware. In the first section, we described how in a rack-mounted switched backbone a computer could be moved from one switch to another by unplugging its cable and plugging it into a different switch. VLANs provide the same capability via software so that the network manager does not have to unplug and replug physical cables to move computers from one segment to another.
Often, VLANs are faster and provide greater opportunities to manage the flow of traffic on the LAN and BN than do the traditional LAN and routed BN architectures. However, VLANs are significantly more complex, so they usually are used only for large networks. The simplest example is a single-switch VLAN, in which the VLAN operates inside one switch. The computers on the VLAN are connected into the one switch and assigned by software into different VLANs (Figure 8-6). The network manager uses special software to assign the dozens or even hundreds of computers attached to the switch to different VLAN segments. The VLAN segments function in the same way as physical LAN segments or subnets; the computers in the same VLAN act as though they are connected to the same physical switch or hub in a certain subnet. Because VLAN switches can create multiple subnets, they act like routers, except that the subnets are inside the switch, not between switches. Therefore, broadcast messages sent by computers in one VLAN segment are sent only to the computers in the same VLAN. Virtual LANs can be designed so that they act as though computers are connected via hubs (i.e., several computers share a given capacity and must take turns using it) or via switches (i.e., all computers in the VLAN can transmit simultaneously). Although switched circuits are preferred to the shared circuits of hubs, VLAN switches with the capacity to provide a complete set of switched circuits for hundreds of computers are more expensive than those that permit shared circuits. We should also note that it is possible to have just one computer in a given VLAN. In this case, that computer has a dedicated connection and does not need to share the network capacity with any other computer. This is commonly done for servers.

Benefits of VLANs

Historically, we have assigned computers to subnets based on geographic location; all the computers in one part of a building have been placed in the same subnet.
With VLANs, we can put computers in different geographic locations in the same subnet. For example, in Figure 8-6, a computer in the lower left could be put on the same subnet as one in the upper right—a separate subnet from all the other computers. A more common implementation is a multiswitch VLAN, in which several switches are used to build the VLANs (Figure 8-7). VLANs are most commonly found in building backbone networks (i.e., access and distribution layers) but are starting to move into core backbones between buildings. In this case, we can now create subnets that span buildings. For example, we could put one of the computers in the upper left of Figure 8-7 in the same subnet as the computers in the lower right, which could be in a completely different building. This enables us to create subnets based on who you are, rather than on where you are; we have an accounting subnet and a marketing subnet, not a Building A subnet and a Building B subnet. We now manage security and network capacity by who you are, not by where your computer is. Because we have several subnets, we need a router—but more on that shortly.

FIGURE 8-6 VLAN-based backbone network architecture

FIGURE 8-7 Multiswitch VLAN-based backbone network design (three VLAN switches connected by trunks; the router attaches to VLAN switch 1 with interfaces VLAN ID 10, IP 179.58.10.1; VLAN ID 20, IP 179.58.7.1; and VLAN ID 30, IP 179.58.11.1; computers tagged with VLAN IDs 10, 20, and 30 have addresses including 179.58.10.101, 179.58.10.102, 179.58.10.103, 179.58.7.30, 179.58.11.20, and 179.58.10.50)

Virtual LANs offer two other major advantages compared to the other network architectures. The first lies in their ability to manage the flow of traffic on the LAN and backbone very precisely. VLANs make it much simpler to manage broadcast traffic, which has the potential to reduce performance, and to allocate resources to different types of traffic more precisely. The bottom line is that VLANs often provide faster performance than the other backbone architectures. The second advantage is the ability to prioritize traffic. The VLAN tag information included in the Ethernet packet defines the VLAN to which the packet belongs and also specifies a priority code based on the IEEE 802.1q standard (see Chapter 4). As you will recall from Chapter 5, the network and transport layers can use RSVP quality of service (QoS), which enables them to prioritize traffic using different classes of service. RSVP is most effective when combined with QoS capabilities at the data link layer. (Without QoS at the hardware layers, the devices that operate there [e.g., layer-2 switches] would ignore QoS information.) With the Ethernet packet's ability to carry VLAN information that includes priorities, we now have QoS capabilities in the data link layer. This means we can connect VOIP telephones directly into a VLAN switch and configure the switch to reserve sufficient network capacity so that they will always be able to send and receive voice messages. The biggest drawbacks to VLANs are their cost and management complexity. VLAN switches are also much newer technologies that have only recently been standardized. Such "leading-edge" technologies sometimes introduce other problems that disappear only after the specific products have matured.

How VLANs Work

VLANs work somewhat differently than the traditional Ethernet/IP approach described in the previous chapters.
Each computer is assigned to a specific VLAN that has a VLAN ID number (which ranges from 1 to 1,005, or up to 4,094 if the extended-range standard is used). Each VLAN ID is matched to a traditional IP subnet, so each computer connected to a VLAN switch also receives a traditional IP address assigned by the VLAN switch (the switch acts as a DHCP server; see Chapter 5). Most VLAN switches can support only 255 separate VLANs simultaneously, which means each switch can support up to 255 separate IP subnets—far more than most organizations want in any single device.

MANAGEMENT FOCUS 8-2 VLANs in Shangri-La

Shangri-La's Rasa Sayang Resort and Spa is a five-star luxury resort hotel located on the scenic Batu Feringgi Beach in Penang, Malaysia. The resort has two main buildings, the 189-room Garden Wing and the 115-room Rasa Wing, with an additional 11 private spa villas. Over the years, the resort had installed three separate networks: one for the resort's operations, one for its POS (point-of-sale) system, and one for Internet access for guests (which was wired, not wireless). The networks were separate to ensure security, so that users of one network could not gain access to another. As part of a multi-million-dollar renovation, the resort decided to upgrade its network to gigabit speeds and to offer wireless Internet access to its guests. Rather than build three separate networks again, it decided to build one network using VLANs. The resort installed 12 wireless access points and 24 VLAN switches, plus two larger core VLAN switches. The VLAN architecture provides seamless management of the wired and wireless components as one integrated network and ensures robust performance and security.
Adapted from: "Wireless Access amidst Lush Greenery of Penang Shangri-La's Resort," HP ProCurve Customer Case Study, Hewlett-Packard, 2010.

Computers are assigned to a VLAN (and the matching IP subnet) based on the physical port on the switch into which they are connected.1 Don't confuse the physical port on the switch (which is the jack the cable plugs into) with the TCP port number from Chapter 5; they are different—it's another example of networking using the same word ("port") to mean two different things. The network manager uses software to assign the computers to specific VLANs using their physical port numbers, so it is simple to move a computer from one VLAN to another. When a computer transmits an Ethernet frame, it uses the traditional Ethernet and IP addresses we discussed in previous chapters (e.g., Chapters 4 and 5) to move the frame through the network, because it doesn't know that it is attached to a VLAN switch. Recall that as a message moves through the network, the IP address is used to specify the final destination and the Ethernet address is used to move the message from one computer to the next along the route to the final destination. Some devices, such as layer-2 switches, are transparent; the Ethernet frame passes through them unchanged. Other devices, such as routers, remove the Ethernet frame and create a new Ethernet frame to send the message to the next computer. VLANs are also transparent—although they do change the frame at times. Let's use Figure 8-7 to explain how VLAN switches work. We'll assume this network uses the first 3 bytes of the IP address to specify the subnet. In this example, we have three VLAN switches with three IP subnets (179.58.10.x, 179.58.7.x, and 179.58.11.x) and three VLANs (10, 20, and 30). A router is used to enable communication among the different IP subnets. Suppose a computer connected to switch 2 (IP 179.58.10.102) sends a message to a computer on the same IP subnet that is also connected to switch 2 (IP 179.58.10.103).
The sending computer will recognize that the destination computer is in the same IP subnet, create an Ethernet frame with the destination computer's Ethernet address (using ARP if needed to find the Ethernet address), and transmit the frame to VLAN switch 2. When a VLAN switch receives a frame that is destined for another computer in the same subnet on the same VLAN switch, the switch acts as a traditional layer-2 switch: it forwards the frame unchanged to the correct computer. Remember from Chapter 7 that switches build a forwarding table that lists the Ethernet address of every computer connected to the switch. When a frame arrives at the switch, the switch looks up the Ethernet address in the forwarding table, and if it finds the address, it forwards the frame to the correct computer. We discuss what happens if the Ethernet address is not in the forwarding table in a moment.

Suppose that a computer wants to send a message to a computer in the same subnet, but the destination computer is actually on a different VLAN switch. For example, in Figure 8-7, suppose this same computer (IP 179.58.10.102) sends a message to a computer on switch 3 (179.58.10.50). The sending computer will act exactly the same, because to it the situation is the same: it doesn't know where the destination computer is; it just knows that the destination is on its own subnet. The sending computer will create an Ethernet frame with the destination computer's Ethernet address (using ARP if needed to find the Ethernet address) and transmit the frame to VLAN switch 2.

1 One type of VLAN switch assigned computers to VLANs based on dynamic criteria such as Ethernet address, but this type of switch has essentially disappeared. The extra cost of dynamic VLAN switches outweighed the benefits they provided, and they lost in the marketplace.
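The "same subnet?" decision that starts both scenarios above can be sketched with Python's ipaddress module, using the text's addresses and its "first 3 bytes specify the subnet" convention (i.e., a /24 mask): if the destination is inside the sender's subnet, the frame is addressed to the destination itself; otherwise it is addressed to the router.

```python
import ipaddress

def next_hop(src: str, dst: str, router: str, prefix: int = 24) -> str:
    """Return the address the Ethernet frame is built for: the destination
    if it shares the sender's subnet, otherwise the router."""
    src_net = ipaddress.ip_network(f"{src}/{prefix}", strict=False)
    return dst if ipaddress.ip_address(dst) in src_net else router

sender, router = "179.58.10.102", "179.58.10.1"
print(next_hop(sender, "179.58.10.50", router))  # same subnet: 179.58.10.50
print(next_hop(sender, "179.58.7.30", router))   # different subnet: 179.58.10.1
```

Note that the sender's decision depends only on subnets, never on which VLAN switch the destination is plugged into; locating the destination switch is entirely the VLAN switches' job.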
Switch 2 receives the frame, looks up the destination Ethernet address in its forwarding table, and recognizes that the frame needs to go to switch 3. VLAN switches use Ethernet 802.1q tagging to move frames from one switch to another. Chapter 4 showed that the layout of an Ethernet frame contains a VLAN tag field, which VLAN switches use to move frames among switches. When a VLAN switch receives an Ethernet frame that needs to go to a computer on another VLAN switch, it changes the frame by inserting the VLAN ID number and a priority code into the VLAN tag field. When a switch is configured, the network administrator defines which VLANs span which switches and also defines VLAN trunks: circuits that connect two VLAN switches and enable traffic to flow from one switch to another. As a switch builds its forwarding table, it receives information from other switches and inserts the Ethernet addresses of the computers attached to them into its forwarding table, along with the correct trunk to use to reach them. In this case, switch 2 receives the frame and uses the forwarding table to identify that it needs to send the frame over the trunk to switch 3. It changes the frame by inserting the VLAN ID and priority code into the tag field and transmits the frame over the trunk to switch 3. Switch 3 receives the frame, looks the Ethernet address up in its forwarding table, and identifies the specific computer to which the frame needs to be sent. The switch removes the VLAN tag information and transmits the revised frame to the destination computer. In this way, neither the sending computer nor the destination computer is aware that the VLAN exists; the VLAN is transparent. Suppose the same sending computer (179.58.10.102) wants to send a message to a computer on a different subnet (e.g., 179.58.7.30 on the same switch or 179.58.11.20 on switch 3).
The sending computer recognizes that the destination is on a different subnet and therefore creates an Ethernet frame with the destination Ethernet address of its router (179.58.10.1) and sends the frame to switch 2. At this point, everything works the same as in the previous example. Switch 2 looks up the destination Ethernet address in its forwarding table and recognizes that the frame needs to go to switch 1, because the router’s Ethernet address is listed in the forwarding table as being reachable through switch 1. Switch 2 sets the VLAN tag information and sends the frame over the trunk to switch 1. Switch 1 looks up the destination Ethernet address in its forwarding table and sees that the router is attached to it. Switch 1 removes the VLAN tag field and sends the frame to the router. The router is a layer-3 device, so when it receives the message, it strips off the Ethernet frame and reads the IP packet. It looks in its routing table and sees that the destination IP address is within a subnet it controls (either 179.58.7.x or 179.58.11.x, depending on which destination computer the packet was sent to). The router creates a new Ethernet frame, sets the destination Ethernet address to that of the destination computer (using ARP if needed), and sends the frame to switch 1. Switch 1 reads the Ethernet address and looks it up in its forwarding table. It discovers the frame needs to go to switch 2 (for 179.58.7.30) or switch 3 (for 179.58.11.20), sets the VLAN tag field, and forwards the frame over the trunk to the correct switch. This switch in turn removes the VLAN tag information and sends the frame to the correct computer. Until now, we’ve been talking about unicast messages (messages from one computer to another), which make up the majority of network traffic.
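The 802.1q tag handling described above (the tag is inserted when a frame crosses a trunk and removed before delivery) can be sketched with a few byte operations; the sample frame bytes are placeholders:

```python
import struct

TPID = 0x8100  # 802.1q tag protocol identifier

def add_vlan_tag(frame, vlan_id, priority=0):
    """Insert the 4-byte 802.1q tag after the destination and source MAC addresses."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # priority code + 12-bit VLAN ID
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def strip_vlan_tag(frame):
    """Remove the tag before the frame is delivered to the destination computer."""
    return frame[:12] + frame[16:]

frame = bytes(12) + b"\x08\x00" + b"payload"      # MACs + EtherType + data (placeholder)
tagged = add_vlan_tag(frame, vlan_id=10, priority=5)
print(tagged[12:16].hex())              # 8100a00a: TPID, then priority 5 + VLAN 10
print(strip_vlan_tag(tagged) == frame)  # True: the end computers never see the tag
```
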
However, what about broadcast messages, such as ARPs, that are sent to all computers in the same subnet? Each computer on a VLAN switch is assigned into a subnet with a matching VLAN ID. When a computer issues a broadcast message, the switch identifies the VLAN ID of the sending computer and then sends the frame to all other computers that have the same VLAN ID. These computers may be on the same switch or on different switches. For example, suppose computer 179.58.10.102 issues an ARP to find an Ethernet address (e.g., the router’s address). Switch 2 would send the broadcast frame to all attached computers with the same VLAN ID (e.g., 179.58.10.103). Switch 2’s trunking information also tells it that VLAN 10 spans switch 1 and switch 3, so it sends the frame to them as well. They, in turn, use their tables to send it to their attached computers in the same VLAN (which includes the router). Note that the router has multiple IP addresses and VLAN IDs because it is connected to several different VLANs and subnets (three, in our example). We have also assumed that the VLAN switch has a complete forwarding table, that is, a table that lists the Ethernet addresses of all the computers in the network. Just like a layer-2 switch, the VLAN switch learns Ethernet addresses as it sends and receives messages. When the VLAN switch is first turned on, the forwarding table is empty, just like the forwarding table of a layer-2 switch; however, its VLAN ID and trunk tables are complete because these are defined by the network administrator. Suppose the switch has just been turned on and has an empty forwarding table. It receives an Ethernet frame, looks up the destination address in the forwarding table, and does not find where to send it. What happens? If the VLAN switch were a layer-2 switch, it would send the frame to all ports. However, a VLAN switch can be a bit smarter than this.
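This smarter flooding rule, sending unknown-destination frames only to ports in the sender’s VLAN rather than to every port, can be sketched like this (the port-to-VLAN assignments are invented for illustration):

```python
class VlanSwitch:
    """Toy model of the VLAN flooding rule: frames with an unknown destination
    are sent only to ports in the sender's VLAN, not to every port."""
    def __init__(self, port_vlan):
        self.port_vlan = port_vlan      # physical port -> VLAN ID (set by the admin)
        self.forwarding_table = {}      # learned: Ethernet (MAC) address -> port

    def learn(self, src_mac, in_port):
        self.forwarding_table[src_mac] = in_port

    def deliver_ports(self, dst_mac, in_port):
        port = self.forwarding_table.get(dst_mac)
        if port is not None:
            return [port]               # known address: forward out one port
        vlan = self.port_vlan[in_port]  # unknown: flood within the sender's VLAN only
        return [p for p, v in self.port_vlan.items() if v == vlan and p != in_port]

sw = VlanSwitch({1: 10, 2: 10, 3: 20, 4: 20})
print(sw.deliver_ports("00:0c:29:12:34:56", in_port=1))  # [2]: the other VLAN 10 port
```
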
If you think about how IP works, you will see that an Ethernet frame is always sent to a computer in the same IP subnet as the sending computer. Any time a frame needs to move to a different subnet, it goes through a router, which sits on both subnets. Think about it for a minute before you continue reading. Therefore, any time the VLAN switch can’t find a destination Ethernet address in the forwarding table, it treats the frame as a broadcast frame and sends it to all the computers in the same subnet, which in VLAN terms means all the computers with the same VLAN ID. This means that a VLAN architecture can improve performance by reducing traffic in the network compared with a switched backbone architecture. Because a switched backbone uses layer-2 switches, all the computers are in the same subnet, and all broadcast traffic goes to all computers. By using a VLAN, we can limit where broadcast traffic flows by dividing the network into separate subnets, so that broadcast messages only go to computers in the same subnet.

8.5 THE BEST PRACTICE BACKBONE DESIGN

The past few years have seen radical changes in the backbone, both in terms of new technologies (e.g., gigabit Ethernet) and architectures (e.g., VLANs). Fifteen years ago, the most common backbone architecture was the routed backbone, connected to a series of shared 10Base-T hubs in the LAN. Today, the most effective architecture for the distribution layer in terms of cost and performance is a switched backbone (either rack-mounted or using a chassis switch) because it provides the best performance at the least cost. For the core layer, most organizations use a routed backbone. Many large organizations are now implementing VLANs, especially those
that have departments spread over multiple buildings, but VLANs add considerable cost and complexity to the network. Given the trade-offs in costs, there are several best practice recommendations. First, the best practice architecture is a switched backbone or VLAN for the distribution layer and a routed backbone for the core layer. Second, the best practice recommendation for backbone technology is gigabit Ethernet. Considering the LAN and backbone environments together, the ideal network design is likely to be a mix of layer-2 and VLAN Ethernet switches. Figure 8-8 shows one likely design.

FIGURE 8-8 The best practice network design (access layer: a layer-2 switch in each building; distribution layer: layer-2 or VLAN switches; core layer: routed or VLAN switching)

The access layer (i.e., the LANs) uses 1000Base-T layer-2 Ethernet switches running on Cat 5e or Cat 6 twisted-pair cables to provide flexibility for 100Base-T or 1000Base-T. The distribution layer uses layer-2 or VLAN switches that use 100Base-T or, more commonly, 1000Base-T/F (over fiber or Cat 6) to connect to the access layer. To provide good reliability, some organizations may provide redundant switches, so that if one fails, the backbone continues to operate. The core layer uses routers or VLAN Ethernet switches running 10 GbE or 40 GbE over fiber.

TECHNICAL FOCUS 8-1: Multiprotocol Label Switching

Multiprotocol Label Switching (MPLS) is an approach to improving QoS and the movement of packets with different layer-2 protocols through TCP/IP networks. With MPLS, routers called Label Switched Routers (LSRs) are used. The network manager defines a series of Forwarding Equivalence Classes (FECs) through the network of LSRs. Each FEC has a reserved data rate and a QoS. When a packet arrives at the edge of the MPLS network, an edge LSR reads the destination address on the incoming packet.
The edge LSR can be configured to use the IP address; the IP address and the source or destination port; or the address in any protocol understood by the LSR. The edge LSR accepts the incoming packet and attaches an MPLS label (a header that contains the FEC address). The edge LSR then forwards the packet to the next LSR as defined in the FEC. This LSR reads the MPLS label and removes it from the incoming packet, consults its MPLS address table to find the packet’s next destination, attaches a new MPLS label with the new FEC address, and forwards the packet to the next LSR in the FEC. This process continues until the packet reaches the edge LSR closest to its final destination. This edge LSR strips off the MPLS label and forwards the packet outside of the MPLS network in exactly the same format in which it entered the MPLS network. The advantage of MPLS is that it can easily integrate layer-2 protocols and also provide QoS in an IP environment. It also enables traffic management by letting the network manager specify an FEC based on both the IP address and the source or destination port.

8.6 IMPROVING BACKBONE PERFORMANCE

The method for improving the performance of BNs is similar to that for improving LAN performance. First, find the bottleneck, and then remove it (or, more accurately, move the bottleneck somewhere else). You can improve the performance of the network by improving the performance of the devices in the network, by upgrading the circuits between them, and by changing the demand placed on the network (Figure 8-9).

8.6.1 Improving Device Performance

The primary functions of computers and devices in BNs are forwarding/routing messages and serving up content. If the devices and computers are the bottleneck, routing can be improved with faster devices or a faster routing protocol.
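As a sketch of what a routing protocol’s processing involves, here is the core update step of distance vector routing (a Bellman-Ford relaxation); the topology and link costs are invented for illustration:

```python
# One distance vector update step: a router merges a neighbor's advertised
# distance table into its own routing table (Bellman-Ford relaxation).
def update_routes(my_table, neighbor, link_cost, neighbor_table):
    """my_table maps destination -> (cost, next_hop). Returns True if anything changed."""
    changed = False
    for dest, neighbor_cost in neighbor_table.items():
        new_cost = link_cost + neighbor_cost
        if dest not in my_table or new_cost < my_table[dest][0]:
            my_table[dest] = (new_cost, neighbor)
            changed = True
    return changed

table = {"B": (1, "B")}       # we reach router B directly at cost 1
advert = {"C": 2, "D": 5}     # B advertises: C costs 2 and D costs 5 from B
update_routes(table, neighbor="B", link_cost=1, neighbor_table=advert)
print(table)  # {'B': (1, 'B'), 'C': (3, 'B'), 'D': (6, 'B')}
```

The per-update work is a simple table merge, which is why distance vector processing is light compared with a link state protocol’s full topology computation.
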
Distance vector routing is faster than link state routing (see Chapter 5) but obviously can impair circuit performance in high-traffic situations. Link state routing is usually used in WANs because there are many possible routes through the network. BNs often have only a few routes through the network, so link state routing may not be too helpful; it will delay processing and increase network traffic because of the status reports sent through the network. Distance vector routing will often simplify processing and improve performance. Most backbone devices are store-and-forward devices. One simple way to improve performance is to ensure that they have sufficient memory. If they don’t, the devices will lose packets, requiring them to be retransmitted.

8.6.2 Improving Circuit Capacity

If network circuits are the bottleneck, there are several options. One is to increase circuit capacity (e.g., by going from 100Base-T Ethernet to gigabit Ethernet). Another option is to add additional circuits alongside heavily used ones so that there are several circuits between some devices. In many cases, the bottleneck on the circuit is only in one place: the circuit to the server. A switched network that provides 100 Mbps to the client computers but a faster circuit to the server (e.g., 1000Base-T) can improve performance at very little cost.

8.6.3 Reducing Network Demand

One way to reduce network demand is to restrict applications that use a lot of network capacity, such as desktop videoconferencing, medical imaging, or multimedia. In practice, it is often difficult to restrict users. Nonetheless, finding one application that places a large demand on the network and moving it can have a significant impact.
FIGURE 8-9 Performance checklist

Increase Device Performance
• Change to a more appropriate routing protocol (either distance vector or link state)
• Increase the devices’ memory

Increase Circuit Capacity
• Upgrade to a faster circuit
• Add circuits

Reduce Network Demand
• Change user behavior
• Reduce broadcast messages

Much network demand is caused by broadcast messages, such as those used to find data link layer addresses (see Chapter 5). Some application software packages and NOS modules written for use on LANs also use broadcast messages to send status information to all computers on the LAN. For example, broadcast messages inform users when printers are out of paper or when the server is running low on disk space. When used in a LAN, such messages place little extra demand on the network because every computer on the LAN gets every message. This is not the case for routed backbones, because messages do not normally flow to all computers, but broadcast messages can consume a fair amount of network capacity in switched backbones. In many cases, broadcast messages have little value outside their individual LAN. Therefore, some switches and routers can be set to filter broadcast messages so that they do not go to other networks. This reduces network traffic and improves performance.

8.7 IMPLICATIONS FOR MANAGEMENT

As the technologies used in LANs and WLANs become faster and better, the amount of traffic the backbone network needs to support is increasing at an even faster rate. Coupled with the significant changes in the best practice recommendations for the design of backbone networks, this means that many organizations have had to replace their backbones.
We would like to think that these have been one-time expenditures, but, as traffic grows, the demand placed on the backbone will continue to increase, meaning the amount spent on switches and routers for use in the backbone will also increase. Designing backbone networks to be easily upgradable is now an important management goal. As Ethernet moves more extensively into the backbone, the costs associated with buying and maintaining backbone devices and training networking staff will decrease, because there will be one standard technology in use throughout the LAN, WLAN, and backbone. The new focus is on faster and faster versions of Ethernet. Although we will spend more on new equipment, performance will increase much more quickly, and the cost to operate the equipment will decrease.

SUMMARY

Switched Backbones These use the same layer-2 switches as LANs to connect the different LANs together. The switches are usually placed in a rack in the same room (called an IDF or MDF) to make them easy to maintain.

Routed Backbones These use routers to connect the different LANs or subnets. Routed backbones are slower than switched backbones, but they prevent broadcast traffic from moving between the different parts of the network.

VLAN Backbones These combine the best features of switched and routed backbones. They are very complex and expensive, so they are mostly used by large companies.

Best Practice Backbone Design The best practice backbone architecture for most organizations is a switched backbone (using a rack or a chassis switch) or VLAN in the distribution layer and a routed backbone in the core layer. The recommended technology is gigabit Ethernet.

Improving Backbone Performance Backbone performance can be improved by choosing the best network layer routing protocols. Upgrading to faster circuits and adding additional circuits on very busy backbones can also improve performance.
Finally, one could move servers closer to the end users or reduce broadcast traffic to reduce backbone traffic.

KEY TERMS

chassis switch; forwarding equivalence class (FEC); IEEE 802.1q; label switched router (LSR); layer-2 switch; main distribution facility (MDF); module; Multiprotocol Label Switching (MPLS); multiswitch VLAN; patch cables; rack; routed backbone; router; single-switch VLAN; switched backbone; virtual LAN (VLAN); VLAN ID; VLAN switch; VLAN tag; VLAN trunk

QUESTIONS

1. How does a layer-2 switch differ from a router?
2. How does a layer-2 switch differ from a VLAN?
3. How does a router differ from a VLAN?
4. Under what circumstances would you use a switched backbone?
5. Under what circumstances would you use a routed backbone?
6. Under what circumstances would you use a VLAN backbone?
7. Explain how routed backbones work.
8. In Figure 8.5, would the network still work if we removed the routers in each building and just had one core router? What would be the advantages and disadvantages of doing this?
9. Explain how switched backbones work.
10. What are the key advantages and disadvantages of routed and switched backbones?
11. Compare and contrast rack-based and chassis-based switched backbones.
12. What is a module and why are modules important?
13. Explain how single-switch VLANs work.
14. Explain how multiswitch VLANs work.
15. What is IEEE 802.1q?
16. What are the advantages and disadvantages of VLANs?
17. How can you improve the performance of a BN?
18. Why are broadcast messages important?
19. What are the preferred architectures used in each part of the backbone?
20. Some experts are predicting that Ethernet will move into the WAN. What do you think?

EXERCISES

A. Survey the BNs used in your organization.
Is the campus core backbone different from the distribution backbones used in the buildings? Why?

B. Document one BN in detail. What devices are attached, what cabling is used, and what is the topology? What networks does the backbone connect?

C. You have been hired by a small company to install a backbone to connect four 100Base-T Ethernet LANs (each using one 24-port hub) and to provide a connection to the Internet. Develop a simple backbone and determine the total cost (i.e., select the backbone technology and price it, select the cabling and price it, select the devices and price them, and so on). Prices are available at www.datacommwarehouse.com, but use any source that is convenient. For simplicity, assume that category 5, category 5e, category 6, and fiber-optic cable have a fixed cost per circuit to buy and install, regardless of distance, of $50, $60, $120, and $300, respectively.

MINICASES

I. Pat’s Engineering Works Pat’s Engineering Works is a small company that specializes in complex engineering consulting projects. The projects typically involve one or two engineers who do

FIGURE 9-9 Using VPN software (the figure shows the packet layering in the VPN tunnel, HTTP inside TCP inside IP inside ESP inside UDP inside IP, carried over Ethernet or PPP between the employee’s computer, the employee’s router, the ISPs, the office router, the office VPN gateway, and the Web server on the office network)

and uses it to log in to the VPN gateway at the office. The VPN software creates a new “interface” on the employee’s computer that acts exactly like a separate connection into the Internet. Interfaces are usually hardware connections, but the VPN is a software interface, although the employee’s computer doesn’t know this; it’s just another interface. Computers can have multiple interfaces; a laptop computer often has two, one for wired Ethernet and one for wireless Wi-Fi. The VPN gateway at the office is also a router and a DHCP server.
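The gateway’s DHCP-like hand-out of tunnel addresses can be sketched as follows; the class and method names are hypothetical, and the subnet matches the 156.56.198.x running example:

```python
import ipaddress

# Hypothetical sketch of a VPN gateway handing out tunnel addresses from the
# subnet it manages (class and method names are invented for illustration).
class VpnGateway:
    def __init__(self, network):
        self.network = ipaddress.ip_network(network)
        # The gateway keeps the first host address (.1) for itself.
        self.assigned = {self.network.network_address + 1}

    def assign_tunnel_address(self):
        for host in self.network.hosts():
            if host not in self.assigned:
                self.assigned.add(host)
                return str(host)
        raise RuntimeError("subnet exhausted")

gw = VpnGateway("156.56.198.0/24")
print(gw.assign_tunnel_address())  # 156.56.198.2, the first free host address
```
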
The VPN gateway assigns an IP address to the VPN interface on the employee’s computer that is an IP address in a subnet managed by the VPN gateway. For example, if the VPN gateway has an IP address of 156.56.198.1 and manages the 156.56.198.x subnet, it would assign an IP address in this subnet (e.g., 156.56.198.55). The employee’s computer now thinks it has two connections to the Internet: the traditional interface that has the computer’s usual IP address and the VPN interface that has an IP address assigned by the VPN gateway. The VPN software on the employee’s computer makes the VPN interface the default interface for all network traffic to and from the Internet, which ensures that all messages leaving the employee’s computer flow through the VPN interface to the VPN gateway at the office. Suppose the employee sends an HTTP request to a Web server at the office (or somewhere else on the Internet). The Web browser software will create an HTTP packet that is passed to the TCP software (which adds a TCP segment), and this in turn is passed to the IP software managing the VPN interface.

Trim Size: 8in x 10in Fitzergald c09.tex V2 - July 2, 2014 8:59 P.M. Page 260 Chapter 9 Wide Area Networks

The IP software creates the IP packet using the source IP address assigned by the VPN gateway. Normally, the IP software would then pass the IP packet to the Ethernet software that manages the Ethernet interface into the employee’s LAN, but because the IP packet is being sent out the VPN interface, the IP packet is passed to the VPN software managing the VPN interface. Figure 9-9 shows the message as it leaves the network software and is passed to the VPN for transmission: an HTTP packet, surrounded by a TCP segment, surrounded by an IP packet. The VPN software receives the IP packet, encrypts it, and encapsulates it (and its contents: the TCP segment and the HTTP packet) within an Encapsulating Security Payload (ESP) packet using IPSec encryption.
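The layering shown in Figure 9-9 can be sketched as successive wrapping; the header fields here are placeholders and the actual ESP encryption is omitted:

```python
# Sketch of the VPN packet layering (header contents are placeholders; the
# real ESP encryption step is omitted).
def wrap(layer_name, header, payload):
    return {"layer": layer_name, **header, "payload": payload}

http = {"layer": "HTTP", "request": "GET /"}
inner = wrap("IP", {"src": "156.56.198.55", "dst": "Web server"},
             wrap("TCP", {"dst_port": 80}, http))
esp = wrap("ESP", {"note": "inner packet is encrypted here"}, inner)
outer = wrap("Ethernet", {},
             wrap("IP", {"src": "employee public IP", "dst": "VPN gateway"},
                  wrap("UDP", {}, esp)))

# Walk inward to list the layers, outermost first:
layers, p = [], outer
while isinstance(p, dict):
    layers.append(p["layer"])
    p = p.get("payload")
print(layers)  # ['Ethernet', 'IP', 'UDP', 'ESP', 'IP', 'TCP', 'HTTP']
```
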
The contents of the ESP packet (the IP packet, the TCP segment, and the HTTP packet) are encrypted so that no one except the VPN gateway at the office can read them. You can think of the ESP packet as an application layer packet whose destination is the office VPN gateway. How do we send an application layer packet over the Internet? We pass it to the transport layer software, which is exactly what the VPN software does. The VPN software passes the ESP packet (and its encrypted contents) to the employee’s computer’s normal Internet interface for transmission. This interface has been sitting around waiting for transmissions, but because the VPN interface is defined as the primary interface, it has received no messages to transfer except those from the VPN software. This interface treats the ESP packet as an application layer packet that needs to be sent to the VPN gateway at the office. It attaches a transport layer packet (a UDP datagram in this case, not a TCP segment). It then passes the ESP packet to the IP software, which creates an IP packet with a destination IP address of the VPN gateway at the office and a source IP address of the employee’s computer’s normal Internet interface. It passes this IP packet to the Ethernet software, which adds an Ethernet frame and transmits it to the employee’s router. The employee’s router receives the Ethernet frame, strips off the frame, and reads the IP packet. It sees that the packet needs to be sent to the VPN gateway at the office, which means sending the packet to the employee’s ISP over the DSL circuit. Because DSL uses PPP as its layer-2 protocol, the router adds a PPP frame and sends the packet over the DSL circuit to the ISP. The router at the ISP strips off the PPP frame and reads the IP packet, which it uses to route the packet through the Internet. As the packet moves over the Internet, the layer-2 frame changes at each hop, depending on the circuit in use.
For example, if the ISP uses a T3 circuit, then the ISP creates an appropriate layer-2 frame to move the packet over the T3 circuit (usually a PPP frame). The packet travels from the Internet to the ISP that connects the office to the Internet and arrives at the office’s router. This router strips off the incoming layer-2 frame (suppose the office uses a T3 connection with PPP, as shown in the figure), reads the IP packet, and creates an Ethernet frame that sends the packet to the office VPN gateway. The VPN gateway strips off the Ethernet frame, reads the IP packet and strips it off, reads the UDP datagram and strips it off, and hands the ESP packet to its VPN software. The VPN gateway’s software decrypts the ESP packet and decapsulates the IP packet (and the TCP segment and HTTP packet it contains) from the ESP packet. The VPN gateway now has the IP packet (and the TCP segment and HTTP packet) that was originally created by the software on the employee’s computer. The VPN gateway reads this IP packet, creates an Ethernet frame to send it on the next hop to its destination, and transmits it into the office network, where it ultimately reaches the Web server. On this last leg of the journey, after it leaves the VPN gateway, the packet is not encrypted and can be read like a normal packet on the Internet. The return path from the Web server back to the employee’s computer is very similar. The Web server will process the HTTP request packet and create an HTTP response packet that it sends back to the employee’s computer. The source address on the IP packet that the Web server received was the IP address associated with the VPN interface on the employee’s computer, so the Web server uses this address as the destination IP address.
This packet is therefore routed back to the VPN gateway, because this IP address is in the subnet that the VPN gateway manages. Once again, the return packet is not encrypted on this part of the journey. When the packet arrives at the VPN gateway, it looks up the VPN IP address in its table and sees the usual IP address of the computer associated with that VPN address. The VPN gateway creates an ESP packet and encrypts the IP packet from the Web server (and the TCP segment and HTTP packet it contains). It then treats the ESP packet as an application layer packet that needs to be sent to the VPN software on the employee’s computer: it passes it to its transport layer software, which adds a UDP datagram; then to its IP software, which adds an IP packet; and then to its Ethernet software, which adds an Ethernet frame and transmits the packet back through the VPN tunnel. When the packet eventually reaches the employee’s computer, it comes in the normal Internet interface and eventually reaches the transport layer software, which strips off the UDP datagram. This software sees that the ESP packet inside the UDP datagram is destined for the VPN software (remember that port numbers are used to identify to which application layer software a packet should go). The VPN software removes the ESP packet and passes the IP packet it contains to the IP software, which in turn strips off the IP header and passes the TCP segment it contains to the TCP software, which strips off the TCP segment and passes the HTTP packet it contains to the Web browser.

9.5 THE BEST PRACTICE WAN DESIGN

Developing best practice recommendations for WAN design is more difficult than for LANs and backbones because the network designer is buying services from different companies rather than buying products. The relatively stable environment enjoyed by the WAN common carriers is facing sharp challenges from VPNs at the low end and Ethernet and MPLS services at the high end.
As larger IT and equipment firms enter the VPN and Ethernet services markets, we should see some major changes in the industry and in the available services and costs. We also need to point out that the technologies in this chapter are primarily used to connect different corporate locations. Technologies primarily used for Internet access (e.g., DSL and cable modem) are discussed in the next chapter. We use the same two factors as we have previously for LANs and backbones (effective data rates and cost), plus one additional factor: reliability. Figure 9-10 summarizes the major services available today for the WAN, grouped by the type of service.

FIGURE 9-10 WAN services

Type of Service              Data Rates           Relative Cost   Reliability
Dedicated-Circuit Services
  T carrier                  64 Kbps to 45 Mbps   Moderate        High
  SONET                      50 Mbps to 10 Gbps   High            High
Packet-Switched Services
  Frame relay                64 Kbps to 45 Mbps   Moderate        High
  Ethernet                   1 Mbps to 40 Gbps    Moderate        High
  MPLS                       64 Kbps to 10 Gbps   Moderate        High
  IP                         64 Kbps to 1 Gbps    Moderate        High
VPN Services
  VPN                        64 Kbps to 50 Mbps   Low             Moderate

A few patterns should emerge from the table. For small WANs with low to moderate data transmission needs, VPN services are a good alternative, provided the lack of reliability is not a major issue. Otherwise, frame relay is a good choice. See Figure 9-11.

FIGURE 9-11 Best practice WAN recommendations

Network Needs                               Recommendation
Low to Moderate Traffic (10 Mbps or less)   VPN if reliability is less important; frame relay otherwise
High Traffic (10–50 Mbps)                   Ethernet, IP, or MPLS if available; T3 if network volume is stable and predictable; frame relay otherwise
Very High Traffic (50 Mbps to 100 Gbps)     Ethernet, IP, or MPLS if available; SONET if network volume is stable and predictable

For networks with high data transmission needs (10–50 Mbps), there are several distinct choices.
If cost is more important than reliability, then a VPN is a possible choice. If you need flexibility in the location of your network connections and you are not completely sure of the volume of traffic you will have between locations, frame relay, IP, or MPLS are good choices. If you have a mature network with predictable demands, then T3 is probably a good choice. For very-high-traffic networks (50 Mbps to 100 Gbps), Ethernet and MPLS services are the dominant choices. And again, some organizations may prefer the more mature SONET services, depending on whether the greater flexibility of packet services provides value or a dedicated circuit makes more sense. Unless their data needs are stable, network managers often start with the more flexible packet-switched services and move to the usually cheaper dedicated-circuit services once their needs have become clear and an investment in dedicated services is safer. Some packet-switched services even permit organizations to establish circuits with a zero CIR (and rely entirely on the availability of the MAR) so network managers can track their needs and lease only what they need. Network managers often add a packet network service as an overlay network on top of a network built with dedicated circuits to handle peak data needs; data usually travel over the dedicated-circuit network, but when it becomes overloaded with traffic, the extra traffic is routed to the packet network.

9.6 IMPROVING WAN PERFORMANCE

Improving the performance of WANs is handled in the same way as improving LAN performance: by upgrading the devices in the network, by upgrading the circuits between the locations, and by changing the demand placed on the network (Figure 9-12).

9.6.1 Improving Device Performance

In some cases, the key bottleneck in the network is not the circuits; it is the devices that provide access to the circuits (e.g., routers).
One way to improve network performance is to upgrade the devices and computers that connect backbones to the WAN. Most devices are rated for their speed in converting input packets to output packets (called latency). Not all devices are created equal; some vendors produce devices with lower latencies than others.
Another strategy is examining the routing protocol, either static or dynamic. Dynamic routing will increase performance in networks that have many possible routes from one computer to another and in which message traffic is “bursty”—that is, in which traffic occurs in spurts, with many messages at one time, and few at others. But dynamic routing imposes an overhead cost by increasing network traffic. In some cases, the traffic and status information sent between computers accounts for more than 50% of all WAN message traffic. This is clearly a problem because it drastically reduces the amount of network capacity available for users’ messages. Dynamic routing should use no more than 10–20% of the network’s total capacity.

FIGURE 9-12 Improving performance of metropolitan and local area networks

Performance Checklist
Increase Computer and Device Performance
• Upgrade devices
• Change to a more appropriate routing protocol (either static or dynamic)
Increase Circuit Capacity
• Analyze message traffic and upgrade to faster circuits where needed
• Check error rates
Reduce Network Demand
• Change user behavior
• Analyze network needs of all new systems
• Move data closer to users

9.6.2 Improving Circuit Capacity

The first step is to analyze the message traffic in the network to find which circuits are approaching capacity. These circuits then can be upgraded to provide more capacity. Less-used circuits can be downgraded to save costs. A more sophisticated analysis involves examining why circuits are heavily used.
For example, in Figure 9-2, the circuit from San Francisco to Vancouver may be heavily used, but much traffic on this circuit may not originate in San Francisco or be destined for Vancouver. It may, for example, be going from Los Angeles to Toronto, suggesting that adding a direct circuit between Los Angeles and Toronto would improve performance to a greater extent than upgrading the San Francisco-to-Vancouver circuit.
The capacity may be adequate for most traffic but not for meeting peak demand. One solution may be to add a packet-switched service that is used only when demand exceeds the capacity of the dedicated-circuit network. The use of a service as a backup for heavy traffic provides the best of both worlds. The lower-cost dedicated circuit is used constantly, and the backup service is used only when necessary to avoid poor response times.
Sometimes a shortage of capacity may be caused by a faulty circuit. As circuits deteriorate, the number of errors increases. As the error rate increases, throughput falls because more messages have to be retransmitted. Before installing new circuits, monitor the existing ones to ensure that they are operating properly, or ask the common carrier to do it.

9.6.3 Reducing Network Demand

There are many ways to reduce network demand. One step is to require a network impact statement for all new application software developed or purchased by the organization. This focuses attention on the network impacts at an early stage in application development. Another simple approach is to use data compression techniques for all data in the network.
Another, more difficult, approach is to shift network usage from peak or high-cost times to lower-demand or lower-cost times. For example, the transmission of detailed sales and inventory reports from a retail store to headquarters could be done after the store closes.
This takes advantage of off-peak rate charges and avoids interfering with transmissions requiring higher priority, such as customer credit card authorizations.
The network also can be redesigned to move data closer to the applications and people who use them. This also will reduce the amount of traffic in the network. Distributed database applications enable databases to be spread across several different computers. For example, instead of storing customer records in one central location, you could store them according to region.

9.7 IMPLICATIONS FOR MANAGEMENT

As the amount of computer data flowing through WANs has increased and as those networks have become increasingly digital, the networking and telecommunications vice president role has significantly changed over the past 10 years. Traditionally this vice president has been responsible for computer communications; today in most companies, this individual is also responsible for telephone and voice services. T carrier, SONET, and old technologies such as ATM have traditionally dominated the WAN market. However, with the growing use of VPNs and of Ethernet and MPLS services, we are beginning to see a major change. In the early 1990s, the costs of WANs were quite high relative to other types of networks. As these networks have changed to increasingly digital technologies, and as competition has increased with the introduction of new companies and new technologies (e.g., VPNs and Ethernet services), costs have begun to drop. More firms are now moving to implement software applications that depend on low-cost WANs, and cloud architectures are becoming common. The same factors that caused the LAN and BN to standardize on a few technologies (Ethernet and wireless Ethernet) are now acting to shape the future of the WAN. We believe that within 5 years, T carrier and frame relay will disappear and will be replaced by Ethernet, IP, and MPLS services.
These changes have also had significant impacts on the manufacturers of networking equipment designed for WANs. Market shares and stock prices have shifted dramatically over the last 5 years in favor of companies with deep experience in backbone technologies (e.g., Ethernet) and Internet technologies (e.g., IP) as those technologies spread into the WAN market.

SUMMARY

Dedicated-Circuit Networks A dedicated circuit is leased from the common carrier for exclusive use 24 hours per day, 7 days per week. You must carefully plan the circuits you need because changes can be expensive. The three common architectures are ring, star, and mesh. T carrier circuits provide a set of digital services ranging from FT1 (64 Kbps) to T1 (1.5 Mbps) to T3 (45 Mbps). A SONET service uses fiber optics to provide services ranging from OC-1 (51 Mbps) to OC-192 (10 Gbps).

Packet-Switched Networks Packet switching is a technique in which messages are split into small segments. The user buys a connection into the common carrier cloud and pays a fixed fee for the connection into the network and for the number of packets transmitted. Frame relay is an older service that provides data rates of 64 Kbps to 45 Mbps. Ethernet services use Ethernet and IP to transmit packets at speeds between 1 Mbps and 100 Gbps. Two newer services, MPLS and IP, provide speeds from 64 Kbps to as much as 40 Gbps.

VPN Networks A VPN provides a packet service network over the Internet. The sender and receiver have VPN devices that enable them to send data over the Internet in encrypted form through a VPN tunnel. Although VPNs are inexpensive, traffic delays on the Internet can be unpredictable.

The Best Practice WAN Design For small WANs with low to moderate data transmission needs, VPN or frame relay services are reasonable alternatives.
For high-traffic networks (10–50 Mbps), Ethernet, IP, or MPLS services are a good choice, but some organizations may prefer the more mature—and therefore proven—T3 services. For very high-traffic networks (50 Mbps to 100 Gbps), Ethernet, IP, or MPLS services are a dominant choice, but again some organizations may prefer the more mature SONET services. Unless their data needs are stable, network managers often start with more flexible packet-switched services and move to the usually cheaper dedicated-circuit services once their needs have become clear and an investment in dedicated services is safer.

Improving WAN Performance One can improve network performance by improving the speed of the devices themselves and by using a better routing protocol. Analysis of network usage can show what circuits need to be increased or decreased in capacity, what new circuits need to be leased, and when additional switched circuits may be needed to meet peak demand. Reducing network demand may also improve performance.
Including a network usage analysis for all new application software, using data compression, shifting usage to off-peak times, establishing priorities for some applications, or redesigning the network to move data closer to those who use it are all ways to reduce network demand.

KEY TERMS

access VPN, 258; Canadian Radio-Television and Telecommunications Commission (CRTC), 246; channel service unit/data service unit (CSU/DSU), 246; committed information rate (CIR), 253; common carrier, 245; dedicated-circuit services, 249; discard eligible (DE), 253; Ethernet services, 254; Encapsulating Security Payload (ESP), 260; extranet VPN, 258; Federal Communications Commission (FCC), 246; fractional T1 (FT1), 251; frame relay, 253; full-mesh architecture, 248; interexchange carrier (IXC), 246; Internet Service Provider (ISP), 257; intranet VPN, 258; IPSec, 258; L2TP, 258; latency, 262; layer-2 VPN, 258; layer-3 VPN, 258; local exchange carrier (LEC), 246; maximum allowable rate (MAR), 253; mesh architecture, 248; multiprotocol label switching (MPLS), 255; packet assembly/disassembly (PAD), 252; packet services, 255; packet-switched services, 251; partial-mesh architecture, 248; permanent virtual circuit (PVC), 253; point of presence (POP), 253; public utilities commission (PUC), 246; ring architecture, 247; star architecture, 248; switched virtual circuit (SVC), 253; synchronous digital hierarchy (SDH), 251; synchronous optical network (SONET), 251; T carrier circuit, 249; T1, T2, T3, T4 circuits, 249; virtual private network (VPN), 257; VPN gateway, 257; VPN software, 258

QUESTIONS

1. What are common carriers, local exchange carriers, and interexchange carriers?
2. Who regulates common carriers and how is it done?
3. How does MPLS work?
4. Compare and contrast dedicated-circuit services and packet-switched services.
5.
Is a WAN that uses dedicated circuits easier or harder to design than one that uses packet-switched circuits? Explain.
6. Compare and contrast ring architecture, star architecture, and mesh architecture.
7. What are the most commonly used T carrier services? What data rates do they provide?
8. Distinguish among T1, T2, T3, and T4 circuits.
9. Describe SONET. How does it differ from SDH?
10. How do packet-switching services differ from other WAN services?
11. Where does packetizing take place?
12. Compare and contrast frame relay, MPLS, and Ethernet services.
13. Which is likely to be the longer-term winner: IP, MPLS, or Ethernet services?
14. Explain the differences between CIR and MAR.
15. How do VPN services differ from common carrier services?
16. Explain how VPN services work.
17. Compare the three types of VPN.
18. How can you improve WAN performance?
19. Describe five important factors in selecting WAN services.
20. Are Ethernet services a major change in the future of networking or a technology blip?
21. Are there any WAN technologies that you would avoid if you were building a network today? Explain.
22. Suppose you joined a company that had a WAN composed of SONET, T carrier, and frame relay services, each selected to match a specific network need for a certain set of circuits. Would you say this was a well-designed network? Explain.
23. It is said that frame relay services and dedicated-circuit services are somewhat similar from the perspective of the network designer. Why?

EXERCISES

A. Find out the data rates and costs of T carrier services in your area.
B. Find out the data rates and costs of packet-switched and dedicated-circuit services in your area.
C. Investigate the WAN of a company in your area. Draw a network map.
D. Using Figure 9-9:
a. Suppose the example used a layer-2 VPN protocol called L2TP. Draw the messages and the packets they would contain.
b. Suppose the Web server was an email server.
Draw the messages from the email server to the employee’s computer. Show what packets would be in the message.
c. Suppose the office connects to its ISP using metro Ethernet. What packets would be in the message from the office router to the ISP?
d. Suppose the employee connects to the ISP using a layer-2 protocol called XYZ. What packets would be in the message from the employee’s router to the ISP?

MINICASES

I. Cookies Are Us Cookies Are Us runs a series of 100 cookie stores across the midwestern United States and central Canada. At the end of each day, the stores send sales and inventory data to headquarters, which uses the data to ship new inventory and plan marketing campaigns. The company has decided to move to a new WAN. What type of a WAN architecture and WAN service would you recommend? Why?
II. MegaCorp MegaCorp is a large manufacturing firm that operates five factories in Dallas, four factories in Los Angeles, and five factories in Albany, New York. It operates a tightly connected order management system that coordinates orders, raw materials, and inventory across all 14 factories. What type of WAN architecture and WAN service would you recommend? Why?
III. Sunrise Consultancy Sunrise Consultancy is a medium-sized consulting firm that operates 17 offices around the world (Dallas, Chicago, New York, Atlanta, Miami, Seattle, Los Angeles, San Jose, Toronto, Montreal, London, Paris, Sao Paulo, Singapore, Hong Kong, Sydney, and Bombay). They have been using Internet connections to exchange email and files, but the volume of traffic has increased to the point that they now want to connect the offices via a WAN. Volume is low but expected to grow quickly once they implement a new knowledge management system. What type of a WAN topology and WAN service would you recommend? Why?
IV. Cleveland Transit Reread Management Focus 9-1.
What other alternatives do you think Cleveland Transit considered? Why do you think they did what they did?
V. Air China Reread Management Focus 9-2. What other alternatives do you think Air China considered? Why do you think they did what they did?
VI. Marietta City Schools Reread Management Focus 9-3. What alternatives do you think Marietta City Schools considered? Why do you think they did what they did?
VII. Cisco Reread Management Focus 9-4. What other alternatives do you think that Cisco considered? Why do you think they did what they did?

CASE STUDY
NEXT-DAY AIR SERVICE
See the companion Web site at www.wiley.com/college/fitzgerald.

HANDS-ON ACTIVITY 9A Examining Wide Area Networks

There are millions of WANs in the world. Some are run by common carriers and are available to the public. Others are private networks run by organizations for their internal use only. Thousands of these networks have been documented on the Web. Explore the Web to find networks offered by common carriers and compare the types of network circuits they have. Now do the same for public and private organizations to see what they have. Figure 9-13 shows the network map for Zayo, a large common carrier (see zayo.com). This figure shows the circuits running at 100 Gbps that connect major cities in the United States. Zayo has a much larger network that includes portions that run slower than 100 Gbps, but the network has hundreds of sites and is too hard to show in one figure.

Deliverable
Print or copy two different WAN maps. Does each WAN use only one type of circuit, or is there a mix of technologies in use?

HANDS-ON ACTIVITY 9B Examining VPNs with Wireshark

If you want to see VPNs in action and understand how they protect your data as they move over the Internet, you can sniff your packets with Wireshark. To do this lab, you’ll have to have a VPN you can use. This will normally be available from your school.
1. Start the VPN software on your computer.
FIGURE 9-13 100 Gbps network for a U.S. Internet Service Provider

In this exercise, you’ll use Wireshark to sniff the packets with and without the VPN. Before you start, you’ll need to download and install Wireshark, a packet sniffer software package, on your computer.
2. Start a Web browser (e.g., Internet Explorer) and go to a Web site.
3. Start Wireshark and click on the Capture menu item. This will open up a new menu (see the very top of Figure 9-14). Click on Interfaces.
4. This will open a new window that will enable you to select which interface you want to capture packets from. Figure 9-14 shows you the three interfaces I have on my computer. The first interface is a dial-up modem that I never use. The second interface (labeled “Broadcom NetXtreme Gigabit Ethernet Driver”) is my Ethernet local area network. It has the IP address of 192.168.1.104. The third interface (labeled “WN (PPP/SLIP) Interface”) is the VPN tunnel; it has an IP address of 156.56.198.144 and only appears when you start the VPN software and log in to a VPN gateway. If you do a WhoIs on this IP address (see Chapter 5 for WhoIs), you will see that this IP address is owned by Indiana University. When I logged into my VPN software, it assigned this IP address to the tunnel so that all IP packets that leave my computer over this tunnel will appear to be from a computer on a subnet on the Indiana University campus that is connected to the VPN gateway. Your computer will have different interfaces and IP addresses because your network is different than mine, but the interfaces should be similar.

FIGURE 9-14 Starting Wireshark

5. Start by capturing packets on your regular Ethernet interface. In my case, this is the second interface. Click on the Start button beside the Ethernet driver (which is 192.168.1.104 on my computer).
6. Go to your Web browser and use it to load a new Web page, which will cause some packets to move through your network.
7. A screen similar to that in Figure 9-15 will appear. After a few seconds, go back to Wireshark and click the Interface menu item and then click Stop.
8. The top window in Figure 9-15 shows the packets that are leaving the computer through the tunnel. Click on a packet to look at it. The middle window in this figure shows what’s inside the packet. We see an Ethernet frame, an IP packet, a UDP datagram, and an Encapsulating Security Payload packet (which is the ESP packet). Notice that you cannot see anything inside the ESP packet because its contents are encrypted. All packets in this tunnel will only flow to and from my computer (192.168.1.104) and the VPN gateway (156.56.245.15).
9. Now we want to look at the packets that are sent by your computer into the VPN tunnel. No one else can see these packets. You can see them only because they are on your computer and you’re looking at them as they move from your traditional network software to your VPN software.
10. Click on the Wireshark Capture menu item and click Interfaces.
11. Click on the Start button beside your VPN interface, which in my case in Figure 9-14 is the button in front of 156.56.198.144.
12. Go to your Web browser and use it to load a new Web page, which will cause some packets to move through your network.
13. A screen similar to that in Figure 9-16 will appear. After a few seconds, go back to Wireshark and click the Interface menu item, and then click Stop.
14. The top window in Figure 9-16 shows the packets that are entering the VPN tunnel.
Click on an HTTP packet to look at it (you may need to scroll to find one). The middle window in this figure shows what’s inside the packet. We see an Ethernet frame, an IP packet, a TCP segment, and an HTTP request (for a page called /enterprise/ on www.tatacommunications.com). We can see these because they have not yet entered the VPN software to be encrypted. These are the packets that would normally be sent over the Internet if we had not started the VPN software. Like all normal Internet messages, they can be read by anyone with sniffer software such as Wireshark.

FIGURE 9-15 Viewing encrypted packets
FIGURE 9-16 Packets that enter the VPN tunnel

Deliverables
1. What layer-2, -3, and -4 protocols are used on your network to transmit an HTTP packet without a VPN?
2. What layer-2, -3, and -4 protocols are used on your network to transmit an HTTP packet when your VPN is active?
3. Look inside the VPN tunnel as was done in step 14. What layer-2, -3, and -4 protocols are used inside the encrypted packet?

HANDS-ON ACTIVITY 9C Examining VPNs with Tracert

Tracert is a useful tool for seeing how VPNs affect routing. To do this lab, you’ll have to have a VPN you can use. This will normally be available from your school. Tracert is a simple command that comes preinstalled on all Windows and Mac computers. Tracert enables you to see the route that an IP packet takes as it moves over the Internet from one computer to another. Do this activity when you are not on campus.
1. Tracert is a command line command, so you first need to start the CMD window. Click Start, then Run, and then type CMD and press enter.
This will open the command window, which is usually a small window with a black background. You can change the size and shape of this window, but it is not as flexible as a usual window.
2. We will first trace the route from your computer to two other computers without using the VPN. So make sure your VPN is not connected.
3. We’ll start by tracing the route from your computer to a computer on the campus of the site you VPN into. In my case, I VPN into my university, which is Indiana University. I can choose to trace the route to any computer on campus. I’ll choose our main Web server (www.iu.edu). At the command prompt, type tracert and the URL of a computer on your campus.
4. The top half of Figure 9-17 shows the route from my computer to www.iu.edu. There are 18 hops and it takes about 35 ms. The first hop does not report information because this feature is turned off in the router at my house for security reasons. You can see that my ISP is Comcast (hop 6). If you compare this to the tracert at the end of Chapter 5, you’ll notice that my ISP changed (and thus the route into the Internet changed) between the time I wrote Chapter 5 and this chapter; Comcast bought Insight in my town of Bloomington, Indiana.
5. Now trace the route from your computer to another computer on the Internet. The bottom of Figure 9-17 shows the route from my computer to www.google.com. There are 17 hops, and it takes about 35 ms. You’ll see that the routes to IU and Google are the same until hop 6, and then they diverge.
6. Next we want to see what happens when you have a VPN connection. Start your VPN software and connect into the VPN gateway at your school.
7. Trace the route from your computer to the same computer as in step 3. At the command prompt, type tracert and the URL of a computer on your campus.
8. The top half of Figure 9-18 shows the route from my computer to www.iu.edu. There are two hops and it takes about 35 ms.
The VPN is in operation and is transparent to my networking software, which thinks it is on the same subnet as the VPN gateway. Therefore, it thinks there is just one hop from my computer to the subnet’s gateway, the VPN gateway. You’ll see that the time is still about 35 ms, so the packet is still traveling the same 18 hops to get there; it’s just that the tracert packet is encapsulated and doesn’t see all the hops through the VPN tunnel.
9. Now do a tracert to the same computer as you did in step 5. The bottom of Figure 9-18 shows the route from my computer to www.google.com. There are nine hops and it takes about 43 ms. Of course, the first hop is really 17 hops and 35 ms; this is again hidden from view. As we explained in the text, when the VPN is connected, all packets go from your computer to the VPN gateway on your campus before being routed to the final destination. You can see from this figure that this adds additional hops and time to packets that are not going to your campus, compared to not using the VPN. You can also see that once the packets leave the VPN gateway, they are ordinary packets; they are no longer encrypted and protected from view. The VPN provides security only to and from the VPN gateway on your campus, not beyond it. Therefore, you should use your VPN if you have security concerns to and from campus (e.g., someone sniffing your packets). But if most of your work is going to be off campus, then the VPN increases the time it takes to send and receive packets and only provides security protection over the last section from your computer to your school’s campus. Using the VPN may not be worth the additional response time it imposes on you.

Deliverables
1. What are the routes from your computer to your campus Web server with and without the VPN?
2. What are the routes from your computer to www.google.com with and without the VPN?
FIGURE 9-17 Tracert without a VPN
FIGURE 9-18 Tracert with a VPN

HANDS-ON ACTIVITY 9D Apollo Residence Network Design

Apollo is a luxury residence hall that will serve honor students at your university. We described the residence in Hands-On Activities at the end of Chapters 7 and 8. The university has recognized that work is going virtual, with more and more organizations building virtual teams with members drawn from different parts of the organization who work together from different cities, instead of meeting face-to-face. It has joined together with five universities across the United States and Canada (located in Boston, Los Angeles, Atlanta, Dallas, and Toronto) to form a consortium of universities that will build virtual team experiences into their programs. The universities have decided to start with their honors programs, and each has created a required course that involves its students working with students at the other universities to complete a major project. The students will use collaboration software such as email, chat, Google Docs, Skype, and WebEx to provide text, audio, and video communication. These tools can be used over the Internet, but to ensure that there are no technical problems, the universities have decided to build a separate private WAN that connects the six honors residences on each university campus (in the five cities listed, plus your university).

Deliverable
Your team was hired to design the WAN for this six-university residence network. Figure 9-19 provides a list of possible WAN services you can use. Specify what services you will use at each location and how the six locations will be connected. Provide the estimated monthly operating cost of the network.
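For the deliverable's cost estimate, the packet-service pricing in Figure 9-19 (a flat base charge plus $.01 per 10,000 packets) can be totaled with a small helper. The function name and the traffic figure below are my own illustration, not part of the assignment:

```python
def packet_service_monthly_cost(base_charge, packets_per_month):
    """Monthly cost of one packet-service circuit priced as in
    Figure 9-19: a base charge plus $.01 per 10,000 packets."""
    return base_charge + 0.01 * (packets_per_month / 10_000)

# Example: a 10 Mbps Ethernet circuit ($1,500 base in Figure 9-19)
# carrying an estimated 50 million packets per month costs
# 1500 + 0.01 * 5000 = $1,550.
cost = packet_service_monthly_cost(1500, 50_000_000)
```

Dedicated circuits (T carrier, SONET) are a flat monthly charge, so for them the comparison is just the base rate; frame relay also adds $10 per PVC routing table entry and any MAR increments on top.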
FIGURE 9-19 Monthly costs for WAN services

WAN Service    Data Rate   Monthly Cost per Circuit
T1             1.5 Mbps    $500
T3             45 Mbps     $5,000
SONET OC-1     52 Mbps     $5,500
SONET OC-3     155 Mbps    $12,000
SONET OC-12    622 Mbps    $30,000
Frame Relay    1.5 Mbps    $250, plus $.01 per 10,000 packets and $10 per PVC routing table entry; MAR available at $50 per 1.5 Mbps
Frame Relay    45 Mbps     $3,500, plus $.01 per 10,000 packets and $10 per PVC routing table entry; MAR available at $50 per 1.5 Mbps
Ethernet       1 Mbps      $1,000, plus $.01 per 10,000 packets
Ethernet       5 Mbps      $1,200, plus $.01 per 10,000 packets
Ethernet       10 Mbps     $1,500, plus $.01 per 10,000 packets
Ethernet       20 Mbps     $2,000, plus $.01 per 10,000 packets
Ethernet       50 Mbps     $2,500, plus $.01 per 10,000 packets
Ethernet       100 Mbps    $3,000, plus $.01 per 10,000 packets
Ethernet       200 Mbps    $3,500, plus $.01 per 10,000 packets
Ethernet       500 Mbps    $4,000, plus $.01 per 10,000 packets
Ethernet       1 Gbps      $5,000, plus $.01 per 10,000 packets
MPLS           1.5 Mbps    $500, plus $.01 per 10,000 packets
MPLS           45 Mbps     $2,500, plus $.01 per 10,000 packets
MPLS           52 Mbps     $3,000, plus $.01 per 10,000 packets
MPLS           155 Mbps    $5,000, plus $.01 per 10,000 packets
MPLS           622 Mbps    $10,000, plus $.01 per 10,000 packets
IP Services    1.5 Mbps    $500, plus $.01 per 10,000 packets
IP Services    45 Mbps     $2,500, plus $.01 per 10,000 packets
IP Services    52 Mbps     $3,000, plus $.01 per 10,000 packets
IP Services    155 Mbps    $5,000, plus $.01 per 10,000 packets
IP Services    622 Mbps    $10,000, plus $.01 per 10,000 packets

Trim Size: 8in x 10in. Fitzergald c10.tex V2 - July 17, 2014 2:40 P.M. Page 276.

CHAPTER 10 THE INTERNET

This chapter examines the Internet in more detail to explain how it works and why it is a network of networks.
The chapter also examines Internet access technologies, such as DSL and cable modem, as well as the possible future of the Internet.

OBJECTIVES
◾ Understand the overall design of the Internet
◾ Be familiar with DSL, cable modem, fiber to the home, and WiMax
◾ Be familiar with possible future directions of the Internet

OUTLINE
10.1 Introduction
10.2 How the Internet Works
10.2.1 Basic Architecture
10.2.2 Connecting to an ISP
10.2.3 The Internet Today
10.3 Internet Access Technologies
10.3.1 Digital Subscriber Line (DSL)
10.3.2 Cable Modem
10.3.3 Fiber to the Home
10.3.4 WiMax
10.4 The Future of the Internet
10.4.1 Internet Governance
10.4.2 Building the Future
10.5 Implications for Management
Summary

10.1 INTRODUCTION

The Internet is the most used network in the world, but it is also one of the least understood. There is no one network that is the Internet. Instead, the Internet is a network of networks—a set of separate and distinct networks operated by various national and state government agencies, nonprofit organizations, and for-profit corporations. The Internet exists only to the extent that these thousands of separate networks agree to use Internet protocols and to exchange data packets among one another.
When you are on the Internet, your computer (iPad, smart phone, etc.) is connected to the network of an Internet Service Provider (ISP) that provides network services for you. Messages flow between your client device and the ISP’s network. Suppose you request a Web page on CNN.com, a Web site that is outside of your ISP’s network. Your HTTP request flows from your device through your ISP’s network and through other networks that link your ISP’s network to the network of the ISP that provides Internet services for CNN. Each of these networks is separate and charges its own customers for Internet access but permits traffic from other networks to flow through them. In many ways, the Internet is like the universe (see Figure 10-1).
Each of us works on his or her own planet (i.e., ISP) with its own rules, but each planet is interconnected with all the others. The Internet is simultaneously a strict, rigidly controlled club in which deviance from the rules is not tolerated and a freewheeling, open marketplace of ideas. All networks that connect to the Internet must rigidly conform to an unyielding set of standards for the transport and network layers; without these standards, data communication would not be possible. At the same time, content and new application protocols are developed freely and without restriction, and quite literally anyone in the world is allowed to comment on proposed changes.

FIGURE 10-1 The Internet is a lot like the universe—many independent systems linked together. Source: NASA

In this chapter, we first explain how the Internet really works and look inside the Seattle Internet exchange point, at which more than 150 separate Internet networks meet to exchange data. We then turn our attention to how you as an individual can access the Internet and what the Internet may look like in the future.

10.2 HOW THE INTERNET WORKS

10.2.1 Basic Architecture

The Internet is hierarchical in structure. At the top are the very large national Internet Service Providers (ISPs), such as AT&T and Sprint, that are responsible for large Internet networks. These national ISPs, called tier 1 ISPs, connect together and exchange data at Internet exchange points (IXPs) (Figure 10-2). For example, AT&T, Sprint, Verizon, Qwest, Level 3, and Global Crossing are all tier 1 ISPs that have a strong presence in North America. In the early 1990s, when the Internet was still primarily run by the U.S. National Science Foundation (NSF), the NSF established four main IXPs in the United States to connect the major tier 1 ISPs (the 1990s name for an IXP was network access point, or NAP).
When the NSF stopped funding the Internet, the companies running these IXPs began charging the ISPs for connections, so today the IXPs in the United States are all not-for-profit organizations or commercial enterprises run by various common carriers such as AT&T and Sprint. As the Internet has grown, so too has the number of IXPs; today there are several dozen IXPs in the United States, with more than a hundred more spread around the world. IXPs were originally designed to connect only large tier 1 ISPs. These ISPs in turn provide services to their own customers and also to regional ISPs (sometimes called tier 2 ISPs) such as Comcast or BellSouth. These tier 2 ISPs rely on the tier 1 ISPs to transmit their messages to ISPs in other countries. Tier 2 ISPs, in turn, provide services to their customers and to local ISPs (sometimes called tier 3 ISPs), who sell Internet access to individuals.

FIGURE 10-2 Basic Internet architecture. ISP = Internet service provider; IXP = Internet exchange point

As the number of ISPs grew, smaller IXPs emerged in most major cities to link the networks of these regional ISPs. Because most IXPs and ISPs now are run by commercial firms, many of the early restrictions on who could connect to whom have been lifted. Most now openly solicit business from all tiers of ISPs and even large organizations. Regional and local ISPs often will have several connections into other ISPs to provide backup connections in case one Internet connection fails.
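The tiered structure just described can be pictured as a graph in which a packet climbs from a tier 3 ISP toward an IXP only as far as necessary to reach the destination's ISP. The following sketch (with an entirely hypothetical topology and made-up ISP names, not taken from the text) traces the chain of networks a packet would cross:

```python
from collections import deque

# Hypothetical tiered topology: each network lists the networks it links to.
links = {
    "tier3-A": ["tier2-A"],
    "tier2-A": ["tier3-A", "tier1-A"],
    "tier1-A": ["tier2-A", "IXP"],
    "IXP": ["tier1-A", "tier1-B"],
    "tier1-B": ["IXP", "tier2-B"],
    "tier2-B": ["tier1-B", "tier3-B"],
    "tier3-B": ["tier2-B"],
}

def network_path(src, dst):
    """Breadth-first search for the chain of networks a packet crosses."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A packet between customers of two different tier 3 ISPs climbs the
# hierarchy, crosses the IXP, and descends the other side.
print(network_path("tier3-A", "tier3-B"))
```

Note how every hop in the result is a separate network with its own operator and its own customers, exactly as in Figure 10-2.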
In this way, they are not dependent on just one higher-level ISP. In general, ISPs at the same level do not charge one another for the messages they exchange. That is, a national tier 1 ISP does not charge another national tier 1 ISP to transmit its messages. This is called peering. Figure 10-2 shows several examples of peering. It is peering that makes the Internet work and that has led to the belief that the Internet is free. This is true to some extent, but higher-level ISPs normally charge lower-level ISPs to transmit their data (e.g., a tier 1 will charge a tier 2, and a tier 2 will charge a tier 3). And of course, any ISP will charge individuals like us for access! In October 2005, an argument between two national ISPs shut down 45 million Web sites for a week. The two ISPs had a peering agreement, but one complained that the other was sending it more traffic than it should, so it demanded payment and stopped accepting traffic, leaving large portions of the network isolated from the rest of the Internet. The dispute was resolved, and they began accepting traffic from each other and the rest of the Internet again. In Figure 10-2, each of the ISPs is an autonomous system, as defined in Chapter 5. Each ISP is responsible for running its own interior routing protocols and for exchanging routing information via the Border Gateway Protocol (BGP) exterior routing protocol (see Chapter 5) at IXPs and at any other connection points between individual ISPs.

10.2.2 Connecting to an ISP

Each ISP is responsible for running its own network that forms part of the Internet. ISPs make money by charging customers to connect to their part of the Internet. Local ISPs charge individuals for access, whereas national and regional ISPs (and sometimes local ISPs) charge larger organizations for access. Each ISP has one or more points of presence (POP).
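The billing rule behind peering and transit reduces to a simple comparison of tiers: equals peer for free, and otherwise the higher-level ISP charges the lower-level one. A minimal sketch of that rule, assuming the tier numbers are the only inputs:

```python
def settlement(tier_a, tier_b):
    """Return who pays whom when ISP A (tier_a) and ISP B (tier_b) interconnect.

    Same tier -> peering, no charge. Otherwise the lower-numbered
    (higher-level) ISP charges the higher-numbered one for transit.
    """
    if tier_a == tier_b:
        return "peering: no charge"
    payer = "A" if tier_a > tier_b else "B"
    return f"transit: ISP {payer} pays"

print(settlement(1, 1))  # peering: no charge
print(settlement(2, 1))  # the tier 2 ISP buys transit from the tier 1 ISP
```

The 2005 dispute described above was essentially a disagreement over which side of this rule applied once traffic between the two networks became lopsided.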
A POP is simply the place at which the ISP provides services to its customers. To connect into the Internet, a customer must establish a circuit from his or her location into the ISP POP. For individuals, this is often done using a DSL modem or cable modem, as we discuss in the next section. Companies can use these same technologies, or they can use the WAN technologies we discussed in the previous chapter. Once connected, the user can begin sending TCP/IP packets from his or her computer to the POP.

MANAGEMENT FOCUS 10-1: Inside the Seattle Internet Exchange Point

The Seattle Internet Exchange (SIX) was established as a nonprofit organization in April 1997 by two small ISPs with offices in Seattle's Westin Building. The ISPs had discovered that to send data to each other's network in the same building, their data traveled to Texas and back. They decided to peer and installed a 10Base-T Ethernet hub connecting their two networks so that traffic flowed between them much more quickly. In June 1997, a third small ISP joined and connected its network into the hub. Gradually word spread, and other small ISPs began to connect. In May 1998, the first tier 1 ISP connected its network, and traffic grew enough that the old 10 Mbps hub was replaced by a 10/100 Ethernet switch. As an aside, the switch you have in your house or apartment today probably has more capacity than this switch did. In February 1999, Microsoft connected its network, and traffic took off again. In September 2001, the 10/100 Ethernet switch was replaced by a 10/100/1000 Ethernet switch. The current configuration is a set of three large GbE switches connected together with 80 Gbps Ethernet circuits. An additional four GbE switches located in the Westin Building connect to these three core switches with 1 Gbps Ethernet. SIX also has additional facilities located around the Seattle area that connect to the three core switches via 10–40 Gbps Ethernet, depending on location.
Today, SIX offers several types of Ethernet connections to its clients. The first 1 Gbps connection is free; all subsequent 1 Gbps connections cost a one-time fee of $1,000, whereas 10 Gbps connections cost a one-time fee of $5,000. Of course, you have to pay a common carrier to provide a network circuit into the Westin Building and then pay the Westin Building a small fee to run a fiber cable from the building's MDF to the SIX network facility. Traffic averages between 100 and 250 Gbps across the SIX network. More than 150 ISPs (e.g., AT&T, World Communications, Bell Canada, and Saskatchewan Telecommunications) and corporations (e.g., Google, Facebook, and Yahoo) are members of SIX. About half of the members are open to peering with anyone who joins SIX. The rest, mostly tier 1 ISPs and well-known corporations, are selective or restrictive in their peering agreements, which means that they are already well connected into the Internet and want to ensure that any new peering agreements make business sense. Adapted from: www.seattleix.net

It is important to note that the customer must pay both for Internet access (paid to the ISP) and for the circuit connecting the customer's location to the POP (usually paid to the local exchange carrier [e.g., BellSouth and AT&T], but sometimes the ISP can also provide the circuit). For a T1 connection, for example, a company might pay the local exchange carrier $250 per month to provide the T1 circuit from its offices to the ISP POP and also pay the ISP $250 per month to provide the Internet access.

An ISP POP is connected to the other POPs in the ISP's network. Any messages destined for other customers of the same ISP flow within the ISP's own network.
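As the T1 example above shows, the customer pays two separate bills: one to the carrier for the access circuit and one to the ISP for Internet access. The arithmetic, using the chapter's example prices, is simply:

```python
def monthly_internet_cost(circuit_fee, access_fee):
    """Total monthly cost = carrier circuit charge + ISP access charge."""
    return circuit_fee + access_fee

# The chapter's T1 example: $250/month to the local exchange carrier for
# the circuit, plus $250/month to the ISP for Internet access.
print(monthly_internet_cost(250, 250))  # 500
```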
In most cases, however, the majority of messages entering the POP are destined for addresses outside the ISP's network and thus must flow through the ISP's network to the nearest IXP and, from there, into some other ISP's network. This can be less efficient than one might expect. For example, suppose you are connected to the Internet via a local tier 3 ISP in Minneapolis and request a Web page from another organization in Minneapolis. A short distance, right? Maybe not. If the other organization uses a different local tier 3 ISP, which in turn uses a different regional tier 2 ISP for its connection into the Internet, the message may have to travel all the way to the nearest IXP, which could be in Chicago, Dallas, or New York, before it can move between the two separate parts of the Internet.

10.2.3 The Internet Today

Figure 10-3 shows the North American backbone of a major ISP as it existed while we were writing this book; it will have changed by the time you read this. As you can see, it has many Internet circuits across the United States and Canada. Many interconnect in Chicago, where many ISPs connect into the Chicago IXP. It also connects into major IXPs in Reston, Virginia; Miami; Los Angeles; San Jose; Palo Alto; Vancouver; Calgary; Toronto; and Montreal. Today, the backbone circuits of the major U.S. national ISPs operate at SONET OC-192 (10 Gbps). A few are now experimenting with OC-768 (40 Gbps), and several are in the planning stages with OC-3072 (160 Gbps). This is good, because the amount of Internet traffic has been growing rapidly. As traffic increases, ISPs can add more and faster circuits relatively easily, but where these circuits come together at IXPs, bottlenecks are becoming more common. Network vendors such as Cisco and Juniper are making larger and larger switches capable of handling these high-capacity circuits, but it is a daunting task.
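SONET OC-n rates scale linearly at the OC-1 base rate of 51.84 Mbps, which is why OC-192 works out to roughly 10 Gbps and OC-768 to roughly 40 Gbps. A quick check of those figures:

```python
OC1_MBPS = 51.84  # SONET base (STS-1/OC-1) line rate in Mbps

def oc_rate_gbps(n):
    """Line rate of SONET OC-n in Gbps: n times the OC-1 rate."""
    return n * OC1_MBPS / 1000

print(round(oc_rate_gbps(192), 2))   # roughly 10 Gbps
print(round(oc_rate_gbps(768), 2))   # roughly 40 Gbps
print(round(oc_rate_gbps(3072), 2))  # roughly 160 Gbps
```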
FIGURE 10-3 A typical Internet backbone of a major ISP

When circuit capacities increase by 100%, switch manufacturers also must increase their capacities by 100%. It is simpler to go from a 622 Mbps circuit to a 10 Gbps circuit than to go from a 20 Gbps switch to a 200 Gbps switch.

10.3 INTERNET ACCESS TECHNOLOGIES

There are many ways in which individuals and organizations can connect to an ISP. Most individuals use DSL or cable modem. As we discussed in the preceding section, many organizations lease T1, T3, or Ethernet circuits into their ISPs. DSL and cable modem technologies are commonly called broadband technologies because they provide high-speed communications.¹ It is important to understand that Internet access technologies are used only to connect from one location to an ISP. Unlike the WAN technologies in the previous chapter, Internet access technologies cannot be used for general-purpose networking from any point to any point. In this section, we discuss four principal Internet access technologies: DSL, cable modem, fiber to the home, and WiMax. Of course, many users connect to the Internet using Wi-Fi on their laptops from public access points in coffee shops, hotels, and airports. Since we discussed Wi-Fi in Chapter 7, we won't discuss it here.

10.3.1 Digital Subscriber Line (DSL)

Digital subscriber line (DSL) is a family of point-to-point technologies designed to provide high-speed data transmission over traditional telephone lines.² The reason for the limited capacity on traditional telephone circuits lies with the telephone and the switching equipment at the end offices. The actual cable in the local loop from a home or office to the telephone company end office is capable of providing much higher data transmission rates.
So DSL usually requires just changing the telephone equipment, not rewiring the local loop, which is what has made it so attractive.

Architecture
DSL uses the existing local loop cable but places different equipment on the customer premises (i.e., the home or office) and in the telephone company end office. The equipment that is installed at the customer location is called the customer premises equipment (CPE). Figure 10-4 shows one common type of DSL installation. (There are other forms.) The CPE in this case includes a line splitter that is used to separate the traditional voice telephone transmission from the data transmissions. The line splitter directs the telephone signals into the normal telephone system so that if the DSL equipment fails, voice communications are unaffected. The line splitter also directs the data transmissions into a DSL modem, which is sometimes called a DSL router. This is both a modem and an FDM multiplexer (see Chapter 3). The DSL modem produces Ethernet packets so it can be connected directly into a computer or to a router and can serve the needs of a small network. Most DSL companies targeting home users combine all of these devices (and a wireless access point) into one device so that consumers just have to install one box, rather than separate line splitters, modems, routers, switches, and access points.

Figure 10-4 also shows the architecture within the local carrier's end office (i.e., the telephone company office closest to the customer premises). The local loops from many customers enter and are connected to the main distribution facility (MDF). The MDF works like the CPE line splitter; it splits the voice traffic from the data traffic and directs the voice traffic to the voice telephone network and the data traffic to the DSL access multiplexer (DSLAM). The DSLAM demultiplexes the data streams and converts them into digital data, which are then distributed to the ISPs. Some ISPs are collocated, in that they have their POPs physically in the telephone company end offices. Other ISPs have their POPs located elsewhere.

FIGURE 10-4 Digital subscriber line (DSL) architecture. ISP = Internet service provider; POP = point of presence

Types of DSL
There are many different types of DSL. The most common type today is asymmetric DSL (ADSL). ADSL uses frequency division multiplexing (see Chapter 3) to create three separate channels over the one local loop circuit. One channel is the traditional voice telephone circuit. A second channel is a relatively high-speed data channel downstream from the carrier's end office to the customer. The third channel is a slightly slower data channel upstream from the customer to the carrier's end office.³ ADSL is called asymmetric because its two data channels have different speeds. Each of the two data channels is further multiplexed using time division multiplexing so it can be subdivided. The size of the two digital channels depends on the distance from the CPE to the end office.

¹ Broadband is a technical term that means "analog transmission" (see Chapter 3). The new broadband technologies often use analog transmission, so they were called broadband. However, the term broadband has been corrupted in common usage so that to most people it usually means "high-speed."
² More information can be found from the DSL Forum (www.adsl.com) and the ITU-T under standard G.992.
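The three FDM channels just described can be illustrated with a band plan commonly used for ADSL over ordinary telephone service: voice below about 4 kHz, upstream data at roughly 25–138 kHz, and downstream data at roughly 138–1,104 kHz. The exact band edges vary by DSL variant, so treat the numbers in this sketch as illustrative assumptions rather than a specification:

```python
def adsl_channel(freq_khz):
    """Classify a frequency into one of ADSL's three FDM channels.

    Band edges follow a common ADSL-over-POTS plan and are
    illustrative only; they differ across DSL variants.
    """
    if freq_khz < 4:
        return "voice"
    if 25 <= freq_khz <= 138:
        return "upstream data"
    if 138 < freq_khz <= 1104:
        return "downstream data"
    return "guard band / unused"

print(adsl_channel(1))    # voice
print(adsl_channel(100))  # upstream data
print(adsl_channel(500))  # downstream data
```

Notice that the downstream band is several times wider than the upstream band, which is what makes the two data channels asymmetric in speed.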
The shorter the distance, the higher the speed, because with a shorter distance, the circuit suffers less attenuation and higher-frequency signals can be used, providing a greater bandwidth for modulation. Figure 10-5 lists the common types of DSL.³

³ Because the second data channel is intended primarily for upstream data communication, many authors imply that this is a simplex channel, but it is actually a set of half-duplex channels.

FIGURE 10-5 Some typical digital subscriber line data rates

Maximum Downstream Rate    Maximum Upstream Rate
3 Mbps                     512 Kbps
6 Mbps                     640 Kbps
12 Mbps                    1.5 Mbps
18 Mbps                    1.5 Mbps
24 Mbps                    3 Mbps

10.3.2 Cable Modem

One alternative to DSL is the cable modem, a digital service offered by cable television companies. The Data over Cable Service Interface Specification (DOCSIS) standard is the dominant one. DOCSIS is not a formal standard but is the one used by most vendors of hybrid fiber coax (HFC) networks (i.e., cable networks that use both fiber-optic and coaxial cable). As with DSL, these technologies are changing rapidly.⁴

Architecture
Cable modem architecture is very similar to DSL—with one very important difference. DSL is a point-to-point technology, whereas cable modems use shared multipoint circuits. With cable modems, each user must compete with other users for the available capacity. Furthermore, because the cable circuit is a multipoint circuit, all messages on the circuit go to all computers on the circuit. If your neighbors were hackers, they could use packet sniffers such as Wireshark (see Chapter 4) to read all messages that travel over the cable, including yours. Figure 10-6 shows the most common architecture for cable modems.
The cable TV circuit enters the customer premises through a cable splitter that separates the data transmissions from the TV transmissions and sends the TV signals to the TV network and the data signals to the cable modem. The cable modem (both a modem and a frequency division multiplexer) translates the cable data into Ethernet packets, which are then directed into a computer or to a router for distribution in a small network. As with DSL, cable modem companies usually combine all of these separate devices into one or two devices to make it easier for the home consumer to install. The cable TV cable entering the customer premises is a standard coaxial cable. A typical segment of cable is shared by anywhere from 300 to 1,000 customers, depending on the cable company that installed the cable. These 300–1,000 customers share the available data capacity, but of course, not all customers who have cable TV will choose to install cable modems. This coax cable runs to a fiber node, which has an optical-electrical (OE) converter to convert between the coaxial cable on the customer side and fiber-optic cable on the cable TV company side. Each fiber node serves as many as half a dozen separate coaxial cable runs. The fiber nodes are in turn connected to the cable company distribution hub (sometimes called a headend) through two separate circuits: an upstream circuit and a downstream circuit. The upstream circuit, carrying data traffic from the customer, is connected into a cable modem termination system (CMTS). The CMTS contains a series of cable modems/multiplexers and converts the data from cable modem protocols into the protocols needed for Internet traffic, before passing them to a router connected to an ISP POP. Often, the cable company is a regional ISP, but sometimes it just provides Internet access to a third-party ISP.

⁴ More information can be found at www.cablemodem.com.
FIGURE 10-6 Cable modem architecture. ISP = Internet service provider; POP = point of presence

The downstream circuit to the customer contains both ordinary video transmissions from the cable TV video network and data transmissions from the Internet. Downstream data traffic enters the distribution hub from the ISP POP and is routed through the CMTS, which produces the cable modem signals. This traffic is then sent to a combiner, which combines the Internet data traffic with the ordinary TV video traffic and sends it back to the fiber node for distribution.

Types of Cable Modems
The DOCSIS standard provides many types of cable modems. The maximum speed is about 150 Mbps downstream and about 100 Mbps upstream, although most cable TV companies provide at most 50 Mbps downstream and 10 Mbps upstream. Cable modems can be configured to limit capacity, so the most common speeds offered by most cable providers range from 1 to 20 Mbps downstream and from 1 to 5 Mbps upstream. Of course, this capacity is shared, so an individual user will see these speeds only when no other computers on his or her segment are active.

MANAGEMENT FOCUS 10-2: Internet Speed Test

The speed of your Internet connection depends on many things, such as your computer's settings, the connection from your computer to your ISP, and the connections your ISP has into the Internet. Many Internet sites enable you to test how fast your Internet connection actually is. Our favorite is speedtest.net.
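Because a cable segment is a shared multipoint circuit, the bandwidth a subscriber actually sees shrinks with the number of simultaneously active neighbors. A back-of-the-envelope sketch (even division among active users is an idealization; real DOCSIS schedulers are more sophisticated):

```python
def per_user_mbps(segment_capacity_mbps, active_users):
    """Idealized fair share of a shared cable segment's capacity."""
    if active_users < 1:
        raise ValueError("need at least one active user")
    return segment_capacity_mbps / active_users

# A 50 Mbps downstream channel shared by 10 simultaneously active modems:
print(per_user_mbps(50, 10))  # 5.0
# The same channel when you are the only active user:
print(per_user_mbps(50, 1))   # 50.0
```

This is why the full advertised speed appears only when the rest of the segment is idle, as the text notes.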
10.3.3 Fiber to the Home

Fiber to the home (FTTH) is exactly what it sounds like: running fiber-optic cable into the home. The traditional set of hundreds of copper telephone lines that run from the telephone company switch office is replaced by one fiber-optic cable that is run past each house or office in the neighborhood. Data are transmitted down the single fiber cable using wavelength division multiplexing (WDM), providing hundreds or thousands of separate channels. As of 2014, FTTH was installed in about 10 million homes in the United States. The largest implementations were in test-market cities in North Dakota, Virginia, and Pennsylvania.

Architecture
FTTH architecture is very similar to DSL and cable modem. At each subscriber location, an optical network unit (ONU) (also called an optical network terminal [ONT]) acts like a DSL modem or cable modem and converts the signals in the optical network into an Ethernet format. The ONU acts as an Ethernet switch and can also include a router. FTTH is a dedicated point-to-point service like DSL, not a shared multipoint service like cable modem. Providers of fiber to the home can use either active optical networking or passive optical networking to connect the ONU in the customer's home. Active networking means that the optical devices require electrical power and work in much the same way as traditional electronic switches and routers. Passive optical networking devices require no electrical current and thus are quicker and easier to install and maintain than traditional electrical-based devices, but because they are passive, the optical signal fades quickly, giving a maximum range of about 10 miles.

Types of FTTH
There are many types of FTTH, and because FTTH is a new technology, these types are likely to evolve as FTTH enters the market and becomes more widely adopted. Common types provide 10–100 Mbps downstream and 1–10 Mbps upstream.
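The reason passive optical signals "fade quickly" follows from the splitting itself: dividing one fiber's light among N subscribers costs at least 10·log10(N) dB of optical power, before any excess loss in the device. A sketch of that idealized figure (real splitters lose somewhat more, so treat this as a lower bound, not a product specification):

```python
import math

def ideal_splitter_loss_db(ways):
    """Minimum power loss of an ideal 1:N passive optical splitter, in dB.

    Splitting light N ways leaves each branch 1/N of the power,
    i.e., a loss of 10 * log10(N) dB, before excess device loss.
    """
    return 10 * math.log10(ways)

print(round(ideal_splitter_loss_db(2), 1))   # 3.0 dB: halving the power
print(round(ideal_splitter_loss_db(32), 1))  # 15.1 dB for a 1:32 split
```

With no powered amplification to make up these losses, the optical budget runs out quickly, which is what limits passive networks to roughly the 10-mile range the text mentions.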
The most commonly used type provides 15 Mbps downstream and 4 Mbps upstream. Newer versions targeted at business users have been announced that provide 1 Gbps down and 100 Mbps up.

10.3.4 WiMax

WiMax (short for Worldwide Interoperability for Microwave Access) is the commercial name for a set of standards developed by the IEEE 802.16 standards group. WiMax is a family of technologies much like the 802.11 Wi-Fi family. It reuses many of the Wi-Fi components and was designed to connect easily into Ethernet LANs. WiMax can be used as a fixed wireless technology to connect a house or an office into the Internet, but its future lies in its ability to connect mobile laptops and smart phones into the Internet. WiMax has been around for several years, but adoption was slow. The problem was that computer manufacturers were waiting for ISPs to build WiMax networks before they built WiMax into their computers. Meanwhile, ISPs were waiting for computer manufacturers to provide WiMax-capable computers before they built WiMax networks. And so we had a catch-22. This changed in 2011 when Intel developed a cheap WiMax chip set. Many computer manufacturers are including WiMax on their laptops, so ISPs have started building WiMax networks. Many large cities now have WiMax networks, and this will gradually spread to other parts of the country. Most experts envision a future where both Wi-Fi and WiMax coexist. Laptops and smart phones will connect to Wi-Fi networks in home and office locations where Wi-Fi is available. If Wi-Fi is not available and the user has subscribed to WiMax services, then the laptop or smart phone will connect to the WiMax network.

Architecture
Although WiMax can be used in fixed locations to provide Internet access to homes and offices, we will focus on mobile use, as this is likely to be the most common use. Mobile WiMax works in much the same way as Wi-Fi.
The laptop or smart phone has a WiMax network interface card (NIC) and uses it to establish a connection to a WiMax access point (AP). Many devices use the same AP, so WiMax is a shared multipoint service in which all computers must take turns transmitting. Media access control is controlled access, using a version of the 802.11 point coordination function (PCF). WiMax uses the 2.3, 2.5, and 3.5 GHz frequency ranges in North America, although additional frequency ranges may be added. The maximum range is from 3 to 10 miles, depending on interference and obstacles between the device and the AP. Most WiMax providers in the United States use effective ranges of 0.5–1.5 miles when they install WiMax APs.

Types of WiMax
There are several types of WiMax available, with new versions under development. The most common type of mobile wireless provides speeds of 40 Mbps, shared among all users of the same AP. Some providers have versions that run at 70 Mbps. New versions under development promise speeds of 300 Mbps.

10.4 THE FUTURE OF THE INTERNET

10.4.1 Internet Governance

Because the Internet is a network of networks, no one organization operates the Internet. The closest thing the Internet has to an owner is the Internet Society (internetsociety.org). The Internet Society is an open-membership professional society with about 150 organizational members and 65,000 individual members in more than 100 countries, including the corporations, government agencies, and foundations that have created the Internet and its technologies. Because membership is open, anyone, including students, is welcome to join and vote on key issues facing the Internet. Its mission is to ensure "the open development, evolution and use of the Internet for the benefit of all people throughout the world." It works in three general areas: public policy, education, and standards.
In terms of public policy, the Internet Society participates in national and international debates on important issues such as censorship, copyright, privacy, and universal access. It delivers training and education programs targeted at improving the Internet infrastructure in developing nations. Its most important activity lies in the development and maintenance of Internet standards. It works through four interrelated standards bodies: the Internet Engineering Task Force, the Internet Engineering Steering Group, the Internet Architecture Board, and the Internet Research Task Force. The Internet Engineering Task Force (IETF) (www.ietf.org) is a large, open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. The IETF works through a series of working groups, which are organized by topic (e.g., routing, transport, and security). The Requests for Comments (RFCs) that form the basis for Internet standards are developed by the IETF and its working groups. Closely related to the IETF is the Internet Engineering Steering Group (IESG). The IESG is responsible for technical management of IETF activities and the Internet standards process. It administers the process according to the rules and procedures that have been ratified by the Internet Society trustees. The IESG is directly responsible for the actions associated with entry into and movement along the Internet "standards track," including final approval of specifications as Internet standards. Each IETF working group is chaired by a member of the IESG.

TECHNICAL FOCUS 10-1: Registering an Internet Domain Name

Until the 1990s, there was only a moderate number of computers on the Internet.
One organization was responsible for registering domain names (sets of application layer addresses) and assigning IP addresses for each top-level domain (e.g., .COM). Network Solutions, for example, was the sole organization responsible for domain name registrations for the .COM, .NET, and .ORG domains. In October 1998, the Internet Corporation for Assigned Names and Numbers (ICANN) was formed to assume responsibility for the IP address space and domain name system management. In spring 1999, ICANN established the Shared Registration System (SRS), which enabled many organizations to perform domain name registration and address assignment using a shared database. More than 1,000 organizations are now accredited by ICANN as registrars and are permitted to use the SRS. Each registrar has the right to assign names and addresses in one or more top-level domains. For a list of registrars and the domains they serve, see www.internic.com. If you want to register a new domain name and obtain an IP address, you can contact any accredited registrar for that top-level domain. One of the oldest privately operated registrars is register.com. Each registrar follows the same basic process for registering a name and assigning an address, but each may charge a different amount for its services. To register a name, you must first check to see whether it is available (i.e., that no one else has registered it). If the name has already been registered, you can find out who owns it and perhaps attempt to buy it from the owner. If the domain name is available, you will need to provide the IP address of the DNS server that will be used to store all IP addresses in the domain. Most large organizations have their own DNS servers, but small companies and individuals often use the DNS of their ISP.

Whereas the IETF develops standards and the IESG provides the operational leadership for the IETF working groups, the Internet Architecture Board (IAB) provides strategic architectural oversight.
The IAB attempts to develop conclusions on strategic issues (e.g., top-level domain names, use of international character sets) that can be passed on as guidance to the IESG, turned into published statements, or simply passed directly to the relevant IETF working group. In general, the IAB does not produce polished technical proposals but rather tries to stimulate action by the IESG or the IETF that will lead to proposals that meet general consensus. The IAB appoints the IETF chairperson and all IESG members from a list provided by the IETF nominating committee. The IAB also adjudicates appeals when someone complains that the IESG has failed. The Internet Research Task Force (IRTF) operates much like the IETF: through small research groups focused on specific issues. Whereas IETF working groups focus on current issues, IRTF research groups work on long-term issues related to Internet protocols, applications, architecture, and technology. The IRTF chairperson is appointed by the IAB.

10.4.2 Building the Future

The Internet is changing. New applications and access technologies are being developed at a lightning pace, but these innovations do not change the fundamental structure of the Internet. That structure has evolved more slowly because the core technologies (TCP/IP) are harder to change; it is difficult to change one part of the Internet without changing the attached parts. Many organizations in many different countries are working on dozens of different projects in an attempt to design new technologies for the next version of the Internet. The two primary American projects working on the future Internet got started at about the same time in 1996. The U.S. National Science Foundation provided $100 million to start the Next Generation Internet (NGI) program, and 34 universities got together to start what turned into Internet2.

FIGURE 10-7 Internet2 network map, showing sites offering Advanced Layer 3 Service (Research/Education and peering IP), Advanced Layer 2 Service (SDN Ethernet add/drop), and Advanced Layer 1 Service (optical wave add/drop) across the United States. Reproduced by permission of Internet2®

Internet2 comprises about 400 universities, corporations, government agencies, and organizations from more than 100 countries, with a primary focus on developing advanced networking as well as other innovative technologies for research and education. Figure 10-7 shows the major high-speed circuits in the Internet2 network. All the circuits in Internet2 are at least OC-192 (10 Gbps). Many circuits are 100 Gbps, with 1 Tbps circuits being tested. The access points are called gigapops, so named because they provide a point of presence at gigabit speeds. Gigapops also usually provide a wider range of services than traditional IXPs, which are primarily just data exchange points. All of the gigapops provide connections at layer 1, the physical layer. Many of the gigapops also provide layer 2 connections (usually Ethernet) and layer 3 connections (usually IPv6). Typical connection fees range from $6,000 per year for 1 Gbps to $165,000 per year for 100 Gbps. Besides providing very high-speed Internet connections, these networks are intended to experiment with new protocols that one day may end up on the future Internet.
For example, most circuits run IPv6 as the primary network layer protocol rather than IPv4. Most are also working on new ways to provide quality of service (QoS) and multicasting. Internet2 is also developing new applications for a high-speed Internet, such as tele-immersion and videoconferencing.

10.5 IMPLICATIONS FOR MANAGEMENT

Several years ago, there was great concern that the traffic on the Internet would exceed its capacity. Traffic on the Internet was increasing significantly faster than new Internet circuits were being built; several experts predicted the collapse of the Internet. It did not happen, for the simple reason that companies could make money by building new circuits and charging for their use. Today, there are a large number of fiber-optic circuits that have been built but not yet turned on. Wavelength division multiplexing technologies mean that 10–20 times more data can now be transmitted through the same fiber-optic cable (see Chapter 3). Many countries, companies, and universities are now building the Next Generation Internet using even newer, experimental, very high-speed technologies. The Internet will not soon run out of capacity.

In recent years, there has been a blossoming of new “broadband” technologies for higher-speed Internet access. Individuals and organizations can now access the Internet at relatively high speeds—much higher speeds than we would have even considered reasonable 5–10 years ago. This means that it is now simple to move large amounts of data into most homes and businesses in North America. As a result, software applications that use the Internet can provide a much richer multimedia experience than ever before. In previous chapters, we described how there has been a significant reduction in the number of different technologies in use in LANs, backbones, and WANs over the past few years.
We have not yet entered that stage with regard to Internet access technologies. Today there are many choices, but over the next 2 years a few dominant standards will emerge, and the market will solidify around those standards. Organizations that invest in the technologies that ultimately become less popular will need to invest significant funds to replace those technologies with the dominant standards. The challenge, of course, is to figure out which technology standards will become dominant. Will it be cable modem and DSL or fiber to the home? Only time will tell.

SUMMARY

How the Internet Works The Internet is a set of separate networks, ranging from large national ISPs to midsize regional ISPs to small local ISPs, that connect with one another at IXPs. IXPs charge the ISPs to connect, but similar-sized ISPs usually do not charge each other to exchange data. Each ISP has a set of points of presence through which it charges its users (individuals, businesses, and smaller ISPs) to connect to the Internet. Users connect to a POP to get access to the Internet. This connection may be via DSL, cable modem, or a WAN circuit such as T1 or Ethernet.

DSL DSL enables users to connect to an ISP POP over a standard point-to-point telephone line. The customer installs a DSL modem that connects via Ethernet to his or her computer system. The modem communicates with a DSLAM at the telephone company office, which sends the data to the ISP POP. ADSL is the most common type of DSL and often provides 24 Mbps downstream and 3 Mbps upstream.

Cable Modem Cable modems use a shared multipoint circuit that runs through the cable TV cable. They also provide the customer with a modem that connects via Ethernet to his or her computer system. The modem communicates with a CMTS at the cable company office, which sends the data to the ISP POP. The DOCSIS standard is the dominant standard, but there are no standard data rates today.
Typical downstream speeds range between 10 and 20 Mbps, and typical upstream speeds range between 1 and 5 Mbps.

Fiber to the Home FTTH is a new technology that is not widely implemented. It uses fiber-optic cables to provide high-speed data services (e.g., 100 Mbps) to homes and offices.

WiMax WiMax works similarly to Wi-Fi in that it enables mobile users to connect to the Internet, at speeds of 40–70 Mbps.

The Future of the Internet The closest thing the Internet has to an owner is the Internet Society, which works on public policy, education, and Internet standards. Standards are developed through four related organizations governed by the Internet Society. The IETF develops the actual standards through a series of working groups. The IESG manages IETF activities. The IAB sets long-term strategic directions, and the IRTF works on future issues through working groups in much the same way as the IETF.
Many different organizations are currently working on the next generation of the Internet, including Internet2.

KEY TERMS

asymmetric DSL (ADSL), 282; autonomous systems, 278; broadband technologies, 281; cable modem, 283; cable modem termination system (CMTS), 283; customer premises equipment (CPE), 281; Data over Cable Service Interface Specification (DOCSIS), 283; digital subscriber line (DSL), 281; distribution hub, 283; DSL access multiplexer (DSLAM), 282; DSL modem, 281; fiber to the home (FTTH), 285; gigapop, 288; Internet Architecture Board (IAB), 287; Internet Corporation for Assigned Names and Numbers (ICANN), 287; Internet Engineering Steering Group (IESG), 286; Internet Engineering Task Force (IETF), 286; Internet exchange point (IXP), 277; Internet Research Task Force (IRTF), 287; Internet Service Provider (ISP), 277; Internet Society, 286; Internet2, 288; line splitter, 281; local loop, 281; main distribution facility (MDF), 281; mobile wireless, 286; national ISP, 277; optical-electrical (OE) converter, 283; optical network unit (ONU), 285; peering, 278; point of presence (POP), 279; regional ISP, 277; request for comment (RFC), 286; tier 1 ISP, 277; tier 2 ISP, 277; tier 3 ISP, 277; WiMax, 285

QUESTIONS

1. What is the basic structure of the Internet?
2. Explain how the Internet is a network of networks.
3. What is an IXP?
4. What is a POP?
5. Explain one reason why you might experience long response times in getting a Web page from a server in your own city.
6. What type of circuits are commonly used to build the Internet today?
7. What type of circuits are commonly used to build Internet2?
8. Compare and contrast cable modem and DSL.
9. Explain how DSL works.
10. How does a DSL modem differ from a DSLAM?
11. Explain how ADSL works.
12. Explain how a cable modem works. What is an OE converter? A CMTS?
13. Which is better, cable modem or DSL? Explain.
14. Explain how FTTH works.
15. What are some future technologies that might change how we access the Internet?
16. Explain how WiMax works.
17. What are the principal organizations responsible for Internet governance, and what do they do?
18. How is the IETF related to the IRTF?
19. What is the principal American organization working on the future of the Internet?
20. What is Internet2?
21. What is a gigapop?
22. Today, there is no clear winner in the competition for broadband Internet access. What technology or technologies do you think will dominate in 2 years’ time? Why?
23. Would you be interested in subscribing to 100 Mbps FTTH for a monthly price of $100? Why or why not?
24. Many experts predicted that small, local ISPs would disappear as regional and national ISPs began offering local access. This hasn’t happened. Why?

EXERCISES

A. Describe the current network structure of Internet2.
B. Provide the service details (e.g., pricing and data rates) for at least one high-speed Internet access service provider in your area.
C. Some people are wiring their homes for 100Base-T. Suppose a friend who is building a house asks you what—if any—network to put inside the house and what Internet access technology to use. What would you recommend?
D. Provide service details (e.g., pricing and data rates) for WiMax in your area or a large city such as New York or Los Angeles.
E. Report the prices and available connections for one IXP, such as the Seattle IXP.

MINICASES

I. Cathy’s Collectibles Your cousin Cathy runs a part-time business out of her apartment. She buys and sells collectibles such as antique prints, baseball cards, and cartoon cells and has recently discovered the Web with its many auction sites. She has begun buying and selling on the Web by bidding on collectibles at lesser-known sites and selling them at a profit at more well-known sites. She downloads and uploads lots of graphics (pictures of the items she’s buying and selling). She asks you for advice. Figure 10-8 shows some of the available Internet services and their prices. Explain the differences in these services and make a recommendation.

II. Surfing Sam Sam likes to surf the Web for fun, to buy things, and to research for his classes. Figure 10-8 shows some of the available Internet services and their prices. Explain the differences in these services and make a recommendation.

III. Cookies Are Us Cookies Are Us runs a series of 100 cookie stores across the midwestern United States and central Canada. At the end of each day, the stores express-mail a diskette or two of sales and inventory data to headquarters, which uses the data to ship new inventory and plan marketing campaigns. They have decided to move the data over a WAN or the Internet. What type of WAN topology and service (see Chapter 9) or Internet connection would you recommend? Figure 10-8 shows some of the available Internet services and their prices, whereas Figure 9-19 in the previous chapter shows faster circuits that could be used to connect to an ISP for Internet services. You should increase the prices in Figure 9-19 by 50% to get the price that an ISP would charge to provide both the faster circuit and Internet services on it. Why?

IV. Organic Foods Organic Foods operates organic food stores in Toronto. The store operates like a traditional grocery store but offers only organically grown produce and meat, plus a wide array of health food products. Organic Foods sells memberships, and its 3,000 members receive a discount on all products they buy. There are also special member events and sales promotions each month. Organic Foods wants to open a new Internet site that will enable it to email its members monthly and provide up-to-date information and announcements about new products, sales promotions, and member events on its Web site. It has two options. First, it could develop the software on its own server in its office and connect the office (and the server) to the Internet via DSL, T1, or a similar connection from its offices to an ISP. Alternately, it could pay the ISP to host the Web site on its servers and just connect the office to the ISP for Internet service. Figure 10-8 shows some of the available Internet services and their prices, whereas Figure 9-19 in the previous chapter shows faster circuits that could be used to connect to an ISP for Internet services. You should increase the prices in Figure 9-19 by 50% to get the price that an ISP would charge to provide both the faster circuit and Internet services on it. Web hosting would cost $500 to $1,000 per month, depending on the traffic. Which would you recommend, and what size of an Internet connection would you recommend if you choose to host it yourself? Justify your choice.

FIGURE 10-8 Internet prices

Service       Speed                        Cost
DSL           3 Mbps down; 512 Kbps up     $30
DSL           6 Mbps down; 640 Kbps up     $35
DSL           12 Mbps down; 1.5 Mbps up    $40
DSL           18 Mbps down; 1.5 Mbps up    $45
DSL           24 Mbps down; 3 Mbps up      $55
DSL           45 Mbps down; 6 Mbps up      $65
DSL           50 Mbps down; 25 Mbps up     $200
DSL           50 Mbps down; 50 Mbps up     $300
Cable Modem   5 Mbps down; 1 Mbps up       $40
Cable Modem   10 Mbps down; 1.5 Mbps up    $45
Cable Modem   16 Mbps down; 3 Mbps up      $70
Cable Modem   50 Mbps down; 10 Mbps up     $110
Cable Modem   75 Mbps down; 15 Mbps up     $150
Cable Modem   100 Mbps down; 20 Mbps up    $200
FTTH          15 Mbps down; 5 Mbps up      $50
FTTH          50 Mbps down; 25 Mbps up     $70
FTTH          75 Mbps down; 35 Mbps up     $100
WiMax         5 Mbps down; 5 Mbps up       $50 for up to 6 GB of data per month; $80 for up to 12 GB of data per month
WiMax         10 Mbps down; 10 Mbps up     $80 for up to 6 GB of data per month; $120 for up to 12 GB of data per month
WiMax         20 Mbps down; 20 Mbps up     $120 for up to 6 GB of data per month; $150 for up to 12 GB of data per month

CASE STUDY NEXT-DAY AIR SERVICE

See the Web site at www.wiley.com/college/fitzgerald

HANDS-ON ACTIVITY 10A Seeing the Internet

The Internet is a network of networks.
One way to see this is by using the VisualRoute software. VisualRoute is a commercial package but provides a demonstration on its Web site. Go to visualroute.com and register to use their free service. Then enter a URL and watch as the route from your computer to the destination is traced and graphed. Figure 10-9 shows the route from my house in Indiana to the City University of Hong Kong.

FIGURE 10-9 Visual trace route

Another interesting site is the Internet Traffic Report (www.internettrafficreport.com). This site shows how busy the parts of the Internet are in real time. The main page enables you to see the current status of the major parts of the world, including a “traffic index” that rates performance on a 100-point scale. You can also see the average response time at key Internet NAPs, MAEs, and peering points (at least those that have agreed to be monitored), which is an average of 135 milliseconds. It also shows the global packet loss rate—the percentage of packets discarded due to transmission errors (an average of 3% today). By clicking on a region of the world, you can see the same statistics for routers in that region. If you click on a specific router, you can see a graph of its performance over the past 24 hours. Figure 10-10 shows the statistics for one router operated by Sprint.

FIGURE 10-10 Internet traffic reports

You can also get traffic reports for Internet2 at noc.net.internet2.edu/i2network/live-network-status.html. The “weathermap,” as Internet2 calls it, shows traffic in both directions because the circuits are full duplex. You can also click on any circuit to see a graph of traffic over the last 24 hours.

Deliverables

1. Trace the route from your computer to CNN.com and to the University of Oxford at www.ox.ac.uk.
2. Use the Internet Traffic Report to find the average response time and packet loss in Asia, Australia, and North America. Pick a router in North America and report its typical response time for the past 24 hours.
3. How busy are the Internet2 links from Chicago to Atlanta right now? What was the peak traffic on these circuits over the last 24 hours?

HANDS-ON ACTIVITY 10B Measuring Your Speed

The download and upload speeds you get on the Internet depend partly on the type of Internet access you have. The speeds also depend on how your ISP is connected to other ISPs, how busy the Internet is today, and how busy the Web site you’re working with is. The last two factors (Internet traffic and Web traffic at the server) are beyond your control. However, you can choose what type of Internet connection you have and who your ISP is. Many sites on the Internet can test the speed of your Internet connection. Our favorite speed site is speedtest.net. Speedtest.net has lots of advertising; ignore it (and any “windows scan” offer) and just do the speed test. You begin by selecting a server for the test. I selected a server in Nova Scotia and tested how fast the connection was between it and my computer in Indiana, which is connected to the Internet using Comcast’s cable modem service. Figure 10-11 shows that my download speed was 28.86 Mbps and my upload speed was 5.63 Mbps. I ran the same test to a server closer to my computer in Indiana and got about the same speeds. The speeds to a server in Mexico were about 1.5 Mbps down and 1.0 Mbps up.

FIGURE 10-11 A speed test on my computer in Indiana

Deliverable

1. Test the upload and download speeds to a server close to your computer and to one far away from you.
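Under the hood, a speed test simply times a transfer and converts bytes per second into megabits per second. The Python sketch below shows the conversion plus a hypothetical measure_download helper; the URL you point it at is your own choice of large test file (no particular server is assumed, and the helper is defined but not run here).

```python
import time
import urllib.request

def mbps(num_bytes: int, seconds: float) -> float:
    """Convert a measured transfer (bytes over seconds) to megabits per second."""
    return num_bytes * 8 / seconds / 1_000_000

def measure_download(url: str) -> float:
    """Time a download of the given URL and return the observed Mbps.

    Hypothetical helper: supply any large test file you have permission
    to fetch. Not executed in this sketch.
    """
    start = time.time()
    data = urllib.request.urlopen(url).read()
    return mbps(len(data), time.time() - start)

# Sanity check of the unit conversion: 36 MB transferred in 10 seconds
# works out to 28.8 Mbps, in the same range as the cable modem download
# speed reported above.
print(round(mbps(36_000_000, 10), 1))  # 28.8
```

Commercial test sites typically open several parallel connections, so a single fetch like this can understate what a fast link is really capable of.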
HANDS-ON ACTIVITY 10C Apollo Residence Network Design

Apollo is a luxury residence hall that will serve honor students at your university. We described the residence in the Hands-On Activities at the end of Chapters 7 and 8. Your university has a good connection to the Internet through the high-speed Internet2 network, which you’ll recall is a network that connects about 400 research and education organizations around the world over some very high-speed Internet circuits. While much of the Internet traffic from the university goes to and comes from the other universities and organizations that are part of Internet2, a substantial portion of traffic goes to and comes from the commercial Internet. This is especially true for traffic generated by undergraduate students, who make up the majority of the intended population of the Apollo Residence. Therefore, the university has decided to build a second connection into the Internet for primary use by the students of the Apollo Residence. This Internet connection will also provide a backup connection for the university’s main Internet connection, just in case Internet2 experiences problems.

Deliverables

Your team was hired to select the Internet circuit. Figure 10-8 provides a list of possible Internet services you can use. Figure 9-19 in the previous chapter shows faster circuits that could be used to connect to an ISP for Internet services. You should increase the prices in Figure 9-19 by 50% to get the price that an ISP would charge for providing both the faster circuit and Internet services on it. Specify what service(s) you will use. Provide the estimated monthly operating cost of the circuit(s).

PART FOUR NETWORK MANAGEMENT

CHAPTER 11 NETWORK SECURITY

This chapter describes why networks need security and how to provide it.
The first step in any security plan is risk assessment: understanding the key assets that need protection and assessing the risks to each. A variety of steps can be taken to prevent, detect, and correct security problems due to disruptions, destruction, disaster, and unauthorized access.

OBJECTIVES

◾ Be familiar with the major threats to network security
◾ Be familiar with how to conduct a risk assessment
◾ Understand how to ensure business continuity
◾ Understand how to prevent intrusion

OUTLINE

11.1 Introduction
  11.1.1 Why Networks Need Security
  11.1.2 Types of Security Threats
  11.1.3 Network Controls
11.2 Risk Assessment
  11.2.1 Develop Risk Measurement Criteria
  11.2.2 Inventory IT Assets
  11.2.3 Identify Threats
  11.2.4 Document Existing Controls
  11.2.5 Identify Improvements
11.3 Ensuring Business Continuity
  11.3.1 Virus Protection
  11.3.2 Denial-of-Service Protection
  11.3.3 Theft Protection
  11.3.4 Device Failure Protection
  11.3.5 Disaster Protection
11.4 Intrusion Prevention
  11.4.1 Security Policy
  11.4.2 Perimeter Security and Firewalls
  11.4.3 Server and Client Protection
  11.4.4 Encryption
  11.4.5 User Authentication
  11.4.6 Preventing Social Engineering
  11.4.7 Intrusion Prevention Systems
  11.4.8 Intrusion Recovery
11.5 Best Practice Recommendations
11.6 Implications for Management
Summary

11.1 INTRODUCTION

Business and government have always been concerned with physical and information security. They have protected physical assets with locks, barriers, guards, and the military since organized societies began. They have also guarded their plans and information with coding systems for at least 3,500 years. What has changed in the last 50 years is the introduction of computers and the Internet. The rise of the Internet has completely redefined the nature of information security. Now companies face global threats to their networks and, more importantly, to their data.
Viruses and worms have long been a problem, but credit card theft and identity theft, two of the fastest-growing crimes, pose immense liability to firms who fail to protect their customers’ data. Laws have been slow to catch up, despite the fact that breaking into a computer in the United States—even without causing damage—is now a federal crime punishable by a fine and/or imprisonment. Nonetheless, we have a new kind of transborder cybercrime against which laws may apply but that will be very difficult to enforce. The United States and Canada may extradite and allow prosecution of digital criminals operating within their borders, but investigating, enforcing, and prosecuting transnational cybercrime across different borders is much more challenging. And even when someone is caught, he or she faces a lighter sentence than a bank robber. Computer security has become increasingly important over the last 10 years with the passage of the Sarbanes-Oxley Act (SOX) and the Health Insurance Portability and Accountability Act (HIPAA). However, despite these measures, the number of security incidents is growing. For example, Verizon’s 2013 security report concluded that at least 174 million electronic records had been compromised in more than 855 separate security incidents. These incidents included not only viruses but also industrial espionage, fraud, extortion, and identity theft. The years when creating a virus was done for fun are long gone. The goal of these attacks was money. You probably heard on the news that the large companies Zappos and Target had been victims of cyberattacks and that the credit card information of millions of their customers had been stolen. However, a company of any size can be the target of an attack.
According to Symantec, more than 50% of all targeted companies had fewer than 2,500 employees, because such companies often have weaker security. Many organizations, private and public, focus on helping individuals, organizations, and governments protect themselves from criminals operating on the Internet (cybercriminals). These include CERT (the Computer Emergency Response Team) at Carnegie Mellon University, the APWG (Anti-Phishing Working Group), the Russian-based Kaspersky Lab, McAfee, and Symantec. There are three main reasons why there has been an increase in security incidents over the past few years. First, in the past, hacking into somebody’s computer was considered a hobby, but today being a cybercriminal is a profession. There are professional organizations that one can hire to break into the computer networks of specific targets to steal information. We are not talking about ethical hacking (when a company hires another company to test its security) but rather hackers who, for a fee, will steal information, intellectual property, or computer code. These attacks are called targeted attacks, in which cybercriminals not only try to exploit technical vulnerabilities but also try to “hack the human” via social engineering or phishing emails. These targeted attacks can be very sophisticated, and any organization can become a victim because every organization has data that can be of value to cybercriminals. Second, hacktivism (the use of hacking techniques to bring attention to a larger political or social goal) has become more common. Hacktivism combines illegal hacking techniques with digital activism and usually targets large organizations and governments by sabotaging or defacing their public Web sites to bring attention to the hackers’ social or political cause. For example, in 2010, the group called Anonymous took down Web sites owned by Visa and MasterCard to protest their denial of payments to WikiLeaks.
This type of threat is not as pervasive as that from cybercriminals, but it has increased in the past few years. Third, the increase in mobile devices offers a very fertile environment for exploitation. More and more frequently, we access our bank accounts, buy items on Amazon, and access our business data through our mobile devices, so cybercriminals are now targeting these mobile devices. These types of attacks often are easier to develop because mobile security is typically weaker than computer security, so they offer a potentially high yield. These trends will increase the value of personal data, and therefore the potential threat to our privacy and the privacy of businesses will increase. It is thus very important for businesses and also individuals to understand their assets, the potential threats to these assets, and the ways they can protect them. We explore these in the next section of this chapter.

11.1.1 Why Networks Need Security

In recent years, organizations have become increasingly dependent on data communication networks for their daily business communications, database information retrieval, distributed data processing, and the internetworking of LANs. The rise of the Internet, with opportunities to connect computers and mobile devices anywhere in the world, has significantly increased the potential vulnerability of the organization’s assets. Emphasis on network security also has increased as a result of well-publicized security break-ins and as government regulatory agencies have issued security-related pronouncements. The losses associated with security failures can be huge. An average annual loss of about $350,000 sounds large enough, but this is just the tip of the iceberg. The potential loss of consumer confidence from a well-publicized security break-in can cost much more in lost business.
More important than these, however, are the potential losses from the disruption of application systems that run on computer networks. As organizations have come to depend on computer systems, computer networks have become “mission-critical.” Bank of America, one of the largest banks in the United States, estimates that it would cost the bank $50 million if its computer networks were unavailable for 24 hours. Other large organizations have produced similar estimates. Protecting customer privacy and the risk of identity theft also drive the need for increased network security. In 1998, the European Union passed strong data privacy laws that fined companies for disclosing information about their customers. In the United States, organizations have begun complying with the data protection requirements of HIPAA and a California law providing fines of up to $250,000 for each unauthorized disclosure of customer information (e.g., if someone were to steal 100 customer records, the fine could be $25 million). As you might suspect, the value of the data stored on most organizations’ networks and the value provided by the application systems in use far exceed the cost of the networks themselves. For this reason, the primary goal of network security is to protect organizations’ data and application software, not the networks themselves.

11.1.2 Types of Security Threats

For many people, security means preventing intrusion, such as preventing an attacker from breaking into your computer. Security is much more than that, however. There are three primary goals in providing security: confidentiality, integrity, and availability (also known as CIA). Confidentiality refers to the protection of organizational data from the unauthorized disclosure of customer and proprietary data. Integrity is the assurance that data have not been altered or destroyed.
Availability means providing continuous operation of the organization’s hardware and software so that staff, customers, and suppliers can be assured of no interruptions in service. There are many potential threats to confidentiality, integrity, and availability. Figure 11-1 shows some threats to a computer center, the data communication circuits, and the attached computers. In general, security threats can be classified into two broad categories: ensuring business continuity and preventing unauthorized access. Ensuring business continuity refers primarily to ensuring availability, with some aspects of data integrity. There are three main threats to business continuity. Disruptions are the loss of or reduction in network service. Disruptions may be minor and temporary. For example, a network switch might fail or a circuit may be cut, causing part of the network to cease functioning until the failed component can be replaced. Some users may be affected, but others can continue to use the network. Some disruptions may also be caused by or result in the destruction of data. For example, a virus may destroy files, or the “crash” of a hard disk may cause files to be destroyed. Other disruptions may be catastrophic. Natural (or human-made) disasters may occur that destroy host computers or large sections of
the network. For example, hurricanes, fires, floods, earthquakes, mudslides, tornadoes, or terrorist attacks can destroy large parts of the buildings and networks in their path.

FIGURE 11-1 Some threats to a computer center, data communication circuits, and client computers

Preventing unauthorized access, also referred to as intrusion, refers primarily to confidentiality, but also to integrity, as an intruder may change important data. Intrusion is often viewed as external attackers gaining access to organizational data files and resources from across the Internet. However, almost half of all intrusion incidents involve employees. Intrusion may have only minor effects. A curious intruder may simply explore the system, gaining knowledge that has little value.
A more serious intruder may be a competitor bent on industrial espionage who could attempt to gain access to information on products under development, or the details and price of a bid on a large contract, or a thief trying to steal customer credit card numbers or information to carry out identity theft. Worse still, the intruder could change files to commit fraud or theft or could destroy information to injure the organization.

MANAGEMENT FOCUS 11-1: Same Old Same Old

No matter the industry, every company should consider itself a target of cybercrime; Target learned this the hard way in December 2013. Russian hacker(s) were able to install malware on the company's point-of-sale systems (cash registers) and steal the credit card information of more than 40 million individuals. The hackers probably got access to Target's network using the credentials of an HVAC vendor. Investigators said that the malware installed on the point-of-sale systems was neither sophisticated nor novel and was detected by two security systems that Target had installed on its network. Why didn't security specialists listen to the warnings from their security software? Target, just like any other company, gets bombarded by thousands of attacks each day, and the likelihood of one of them getting through increases each day: simple probability. Although some attacks are sophisticated, most of them are well known. One could say, same old, same old. Cyberattackers are playing a numbers game: the more persistent they are in their attacks, the more likely they are to get inside a network and gain access to critical information such as credit card numbers. This reminds us that cybersecurity is a global problem and that everybody who uses the Internet can be, and probably is, under attack.
Therefore, learning about security and investing in it is necessary to survive and thrive in the Internet era.

Adapted from: "Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It," by Michael Riley, Ben Elgin, Dune Lawrence, and Carol Matlack, March 13, 2014, Bloomberg Businessweek (www.businessweek.com), and Krebs on Security (krebsonsecurity.com).

11.1.3 Network Controls

Developing a secure network means developing controls. Controls are software, hardware, rules, or procedures that reduce or eliminate the threats to network security. Controls prevent, detect, and/or correct whatever might happen to the organization because of threats facing its computer-based systems. Preventive controls mitigate or stop a person from acting or an event from occurring. For example, a password can prevent illegal entry into the system, or a second set of circuits can prevent the network from crashing. Preventive controls also act as a deterrent by discouraging or restraining someone from acting or proceeding because of fear or doubt. For example, a guard or a security lock on a door may deter an attempt to gain illegal entry. Detective controls reveal or discover unwanted events. For example, software that looks for illegal network entry can detect these problems. They also document an event, a situation, or an intrusion, providing evidence for subsequent action against the individuals or organizations involved or enabling corrective action to be taken. For example, the same software that detects the problem must report it immediately so that someone or some automated process can take corrective action. Corrective controls remedy an unwanted event or an intrusion. Either computer programs or humans verify and check data to correct errors or fix a security breach so it will not recur. They also can recover from network errors or disasters.
For example, software can recover and restart the communication circuits automatically when there is a data communication failure. The remainder of this chapter discusses the various controls that can be used to prevent, detect, and correct threats. We also present a general risk assessment framework for identifying the threats and their associated controls. This framework provides a network manager with a good view of the current threats and any controls that are in place to mitigate their occurrence. Nonetheless, it is important to remember that it is not enough just to establish a series of controls; someone or some department must be accountable for the control and security of the network. This includes being responsible for developing controls, monitoring their operation, and determining when they need to be updated or replaced. Controls must be reviewed periodically to be sure that they are still useful and must be verified and tested. Verifying ensures that the control is present, and testing determines whether the control is working as originally specified. It is also important to recognize that there may be occasions in which a person must temporarily override a control, for instance, when the network or one of its software or hardware subsystems is not operating properly. Such overrides should be tightly controlled, and there should be a formal procedure to document each occurrence.

11.2 RISK ASSESSMENT

The first step in developing a secure network is to conduct a risk assessment. There are several commonly used risk assessment frameworks that provide strategies for analyzing and prioritizing the security risks to information systems and networks. A risk assessment should be simple so that both technical and nontechnical readers can understand it.
After reading a risk assessment, anyone should be able to see which systems and network components are at high risk for attack or abuse and which are at low risk. The reader should also be able to see what controls have been implemented to protect him or her and what new controls need to be implemented. Three risk assessment frameworks are commonly used:

1. Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE), from the Computer Emergency Readiness Team
2. Control Objectives for Information and Related Technology (COBIT), from the Information Systems Audit and Control Association
3. Risk Management Guide for Information Technology Systems (the NIST guide), from the National Institute of Standards and Technology

Each of these frameworks offers a slightly different process with a different focus. However, they share five common steps:

1. Develop risk measurement criteria
2. Inventory IT assets
3. Identify threats
4. Document existing controls
5. Identify improvements

11.2.1 Develop Risk Measurement Criteria

Risk measurement criteria are the measures used to evaluate how a security threat could affect the organization. For example, suppose that a hacker broke in and stole customer credit card information from a company server. One immediate impact on the organization is financial, because some customers are likely to stop shopping, at least in the short term. Depending on where the company is located, there may also be some legal impact, because some countries and/or states have laws concerning the unauthorized release of personal information. There may also be longer-term impacts to the company's reputation. Each organization needs to develop its own set of potential business impacts, but the five most commonly considered impact areas are financial (revenues and expenses), productivity (business operations), reputation (customer perceptions), safety (health of customers and employees), and legal (potential for fines and litigation).
However, some organizations add other impacts, and not all organizations use all five, because some may not apply. It is important to remember that these impacts are for information systems and networks, so although safety is important to most organizations, there may be little impact on safety from information system and network problems. Once the impact areas have been identified, the next step is to prioritize them. Not all impact areas are equally important to all organizations. Some areas may be high priority, some medium, and some low. For example, for a hospital, safety may be the highest priority and financial the lowest. In contrast, for a restaurant, information systems and networks may pose a low (or nonexistent) safety risk (because they are not involved in food safety) but a high-priority reputation risk (if, for example, credit card data were stolen). There may be a temptation to say every impact is high priority, but this is the same as saying that all impacts are medium, because you cannot distinguish between them when it comes time to take action.

FIGURE 11-2 Sample risk measurement criteria for a Web-based bookstore

Impact Area  | Priority | Low Impact                                         | Medium Impact                          | High Impact
Financial    | High     | Sales drop by less than 2%                         | Sales drop by 2%–10%                   | Sales drop by more than 10%
Productivity | Medium   | Annual operating expenses increase by less than 3% | Increase of 3%–6%                      | Increase of more than 6%
Reputation   | High     | Number of customers decreases by less than 2%      | Decrease of 2%–15%                     | Decrease of more than 15%
Legal        | Medium   | Fines or legal fees of less than $10,000           | Fines or legal fees of $10,000–$60,000 | Fines or legal fees exceeding $60,000
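Threshold tables like Figure 11-2 translate directly into a small lookup. The sketch below is illustrative only: it hard-codes the bookstore's financial and legal bands from the figure, and the function names are invented for this example; the band boundaries themselves are business decisions, not fixed values.

```python
def financial_impact(sales_drop_pct):
    """Band a sales drop using the Figure 11-2 financial thresholds."""
    if sales_drop_pct < 2:
        return "low"
    if sales_drop_pct <= 10:
        return "medium"
    return "high"

def legal_impact(fees_usd):
    """Band fines or legal fees using the Figure 11-2 legal thresholds."""
    if fees_usd < 10_000:
        return "low"
    if fees_usd <= 60_000:
        return "medium"
    return "high"

print(financial_impact(5))   # medium: a 5% sales drop falls in the 2%-10% band
print(legal_impact(75_000))  # high: fees exceed $60,000
```

Encoding the criteria this way has one practical benefit: the business-chosen thresholds live in exactly one place, so when leaders revise them, every threat scenario is rescored consistently.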
The next step is to develop specific measures of what could happen in each impact area and what we would consider a high, medium, and low impact. For example, one financial impact could be a decrease in sales. What would we consider a low financial impact in terms of a decrease in sales: 1%? 2%? What would be a high impact on sales? These are business decisions, not technology decisions, so they should be made by the business leaders. Figure 11-2 shows sample risk measurement criteria for a Web-based bookstore. As you can see, only four of the impact areas apply for this company, because information systems and network security problems would not harm the safety of employees or customers. It would be a different case if this were a pharmaceutical company: a threat such as malware could cause changes in how a drug is prepared, potentially harming customers (patients) and also employees. As Figure 11-2 suggests, our fictional Web-based book company believes that financial and reputation impacts have high priority, whereas productivity and legal impacts are medium. The figure also provides metrics for assessing the impact of each risk. For example, our fictitious company considers it a low financial impact if its sales were to drop by less than 2% because of security problems; the financial impact would be high if it were to lose more than 10% of sales.

11.2.2 Inventory IT Assets

An asset is something of value: hardware, software, data, or an application. Figure 11-3 defines six common categories of IT assets. An important type of asset is the mission-critical application, an information system that is critical to the survival of the organization. It is an application that cannot be permitted to fail, and if it does fail, the network staff drops everything else to fix it. For example, for an Internet bank that has no brick-and-mortar branches, the Web site is a mission-critical application.
If the Web site crashes, the bank cannot conduct business with its customers. Mission-critical applications are usually clearly identified so that their importance is not overlooked.

TECHNICAL FOCUS 11-1: Basic Control Principles of a Secure Network

• The less complex a control, the better.
• A control's cost should be equivalent to the identified risk. It often is not possible to ascertain the expected loss, so this is a subjective judgment in many cases.
• Preventing a security incident is always preferable to detecting and correcting it after it occurs.
• An adequate system of internal controls is one that provides "just enough" security to protect the network, taking into account both the risks and the costs of the controls.
• Automated (computer-driven) controls are always more reliable than manual controls that depend on human interaction.
• Controls should apply to everyone, not just a few select individuals.
• When a control has an override mechanism, make sure that it is documented and that the override procedure has its own controls to avoid misuse.
• Institute the various security levels in an organization on the basis of "need to know." If you do not need to know, you do not need to access the network or the data.
• The control documentation should be confidential.
• Names, uses, and locations of network components should not be publicly available.
• Controls must be sufficient to ensure that the network can be audited, which usually means keeping historical transaction records.
• When designing controls, assume that you are operating in a hostile environment.
• Always convey an image of high security by providing education and training.
• Make sure the controls provide the proper separation of duties. This applies especially to those who design and install the controls and those who are responsible for everyday use and monitoring.
• It is desirable to implement entrapment controls in networks to identify attackers who gain illegal access.
• When a control fails, the network should default to a condition in which everyone is denied access. A period of failure is when the network is most vulnerable.
• Controls should still work even when only one part of a network fails. For example, if a backbone network fails, all local area networks connected to it should still be operational, with their own independent controls providing protection.
• Don't forget the LAN. Security and disaster recovery planning has traditionally focused on host mainframe computers and WANs. However, LANs now play an increasingly important role in most organizations but are often overlooked by central site network managers.
• Always assume your opponent is smarter than you.
• Always have insurance as the last resort should all controls fail.

FIGURE 11-3 Types of assets (DNS = Domain Name Service; DHCP = Dynamic Host Configuration Protocol; LAN = local area network; WAN = wide area network)

Hardware: servers (such as mail servers, Web servers, DNS servers, DHCP servers, and LAN file servers), client computers, and devices such as switches and routers
Circuits: locally operated circuits such as LANs and backbones, contracted circuits such as WAN circuits, and Internet access circuits
Network software: server operating systems and system settings, and application software such as mail server and Web server software
Client software: operating systems and system settings, and application software such as word processors
Organizational data: databases with organizational records
Mission-critical applications: for example, for an Internet bank, its Web site is mission-critical

The next most important type of asset is the organization's data. For example, suppose someone were to destroy a mainframe computer worth $10 million.
The computer could be replaced simply by buying a new one. It would be expensive, but the problem would be solved in a few weeks. Now suppose someone were to destroy all the student records at your university so that no one would know what courses anyone had taken or their grades. The cost would far exceed the cost of replacing a $10 million computer. The lawsuits alone would easily exceed $10 million, and the cost of staff to find and reenter paper records would be enormous and would certainly take more than a few weeks. Once all assets are identified, they need to be rated for importance. To rank them, you need to answer questions such as, What would happen if this information asset's confidentiality, integrity, or availability were compromised? This will allow you to assess the importance of each asset as low, medium, or high. You also need to document each asset, not just information assets, and briefly describe why each asset is critical to the organization. Finally, the owners of each asset are recorded. Figure 11-3 summarizes some typical assets found in most organizations.

11.2.3 Identify Threats

A threat is any potential occurrence that can do harm, interrupt the systems using the network, or cause a monetary loss to the organization. Figure 11-5 summarizes the most common types of threats and their likelihood of occurring, based on several surveys in recent years. The figure shows the percentage of organizations affected each year by each threat but not whether the threat caused damage; for example, 100% of companies reported experiencing one or more viruses each year, but in most cases, the antivirus software prevented any problems. The actual probability of a threat to your organization depends on your business. An Internet bank, for example, is more likely to be a target of information theft than a restaurant with a simple Web site. Nonetheless, Figure 11-5 provides some general guidance. The next step is to create threat scenarios.
A threat scenario describes how an asset can be compromised by one specific threat. An asset can be compromised by more than one threat, so it is common to have more than one threat scenario for each asset. For example, the confidentiality, integrity, and/or availability of the client database in Figure 11-4 can be compromised by information theft (confidentiality), sabotage (integrity), or a natural disaster such as a tornado (availability). When preparing a threat scenario, we name the asset, describe the threat, explain the consequence (a violation of confidentiality, integrity, or availability), and estimate the likelihood of this threat happening (high, medium, or low). Figure 11-6 provides an example of a threat scenario for one asset (the customer database) of a Web-based bookstore. The top half of the threat scenario describes the risk associated with the asset from the threat, while the bottom half describes the existing controls that have been implemented to protect the asset from this threat. This step focuses on the top half of the threat scenario, whereas the next step (11.2.4) describes the bottom half. A threat scenario begins with the name of the asset and the threat being considered. The threat is described, and the likelihood of its occurrence is assessed as high, medium, or low. Then the potential impact is identified, whether to confidentiality, integrity, or availability; some threats could have multiple impacts. Next, the consequences of the threat are assessed, using the impact areas identified in step 1 and their priority (e.g., reputation, financial, productivity, safety, and legal). We identify the impact that each scenario could have on each priority area (high, medium, or low), using the risk measurement criteria defined in step 1.
We calculate an impact score by multiplying the priority of each area by the impact the threat would have, using 1 for a low value, 2 for a medium value, and 3 for a high value, and summing the results.

FIGURE 11-4 Sample inventory of assets for a Web-based bookstore (for each asset, the figure also marks its most important security requirement: confidentiality, integrity, or availability)

Asset | Importance | Description | Owner(s)
Customer database | High | Contains all customers' records, including address and credit card information. | VP of Marketing, CIO
Web server | High | Used by our customers to place orders. It is very important that it be available 24/7. | CIO
Mail server | Medium | Used by employees for internal communication. It is very important that no one intercept this communication, as sensitive information is shared via email. | CIO
Financial records | High | Used by the C-level executives and by the VP of Operations. It is imperative that nobody but the C-team be able to access this information. | CFO
Employees' computers | Low | Each employee is assigned a cubicle with a desktop computer. Employees provide customer service and support for our Web site using these computers. | Division directors

Finally, we can calculate the relative risk score by multiplying the impact score by the likelihood (using 1 for low likelihood, 2 for medium likelihood, and 3 for high likelihood). Figure 11-6 shows that the risk score for information theft from the customer database is 50. The absolute number does not really tell us anything by itself. Instead, we compare the risk scores among all the different threat scenarios to help us identify the most important risks we face.
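The scoring arithmetic just described is simple enough to sketch in a few lines of Python. The numbers below reproduce the two worked scenarios (theft of customer information and the tornado), using 1/2/3 for low/medium/high as in the text; the function and variable names are invented for this illustration.

```python
LEVEL = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood, area_ratings):
    """area_ratings: (priority, impact) level names, one pair per impact area."""
    impact_score = sum(LEVEL[priority] * LEVEL[impact]
                       for priority, impact in area_ratings)
    return LEVEL[likelihood] * impact_score

# Theft of customer information: priorities and impacts from Figure 11-6
theft = [("high", "medium"),    # financial:    3 x 2 = 6
         ("medium", "high"),    # productivity: 2 x 3 = 6
         ("high", "high"),      # reputation:   3 x 3 = 9
         ("medium", "medium")]  # legal:        2 x 2 = 4
print(risk_score("medium", theft))  # impact score 25, risk score 50

# Tornado: priorities and impacts from Figure 11-7
tornado = [("high", "low"), ("medium", "high"), ("high", "low"), ("medium", "low")]
print(risk_score("low", tornado))   # impact score 14, risk score 14
```

Comparing the two outputs (50 versus 14) is exactly the comparison the text asks for: the absolute numbers mean little, but the ranking identifies information theft as the greater risk.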
Figure 11-7 shows the threat scenario for a tornado strike against our customer database. Take a moment and compare the two threat scenarios. You can see that the tornado risk score is 14, which shows that information theft is a greater risk than a tornado.

FIGURE 11-5 Likelihood of a threat (percentage of organizations experiencing each event per year: virus, theft of equipment, theft of information, device failure, natural disaster, sabotage, denial of service)

FIGURE 11-6 Threat scenario for theft of customer information

Asset: Customer database (importance: High)
Threat: Theft of information
Description: An external hacker or a disgruntled current or former employee can gain unauthorized access to the client data and distribute it to a third party.
Likelihood: Medium (2)
Impact on: Confidentiality

Impact Area | Priority | Impact | Score
Financial | High (3) | Medium (2) | 6
Productivity | Medium (2) | High (3) | 6
Reputation | High (3) | High (3) | 9
Legal | Medium (2) | Medium (2) | 4
Impact Score: 25
Risk Score (Likelihood × Impact Score): 50

Adequacy of Existing Controls: Medium
Risk Control Strategy: Mitigate

Risk Mitigation Controls:
• Encryption: The database is encrypted.
• Firewall: A firewall is installed on the router in front of the database to prevent unauthorized access.
• Personnel policy: All employees have their log-in credentials removed within 24 hours of their resignation or termination.
• Training: Employees must attend annual security training that covers the information disclosure policy, phishing, and social engineering techniques to ensure they do not provide their passwords to anyone.
• Automatic screen lock: Each employee's computer locks if it has not been used for five minutes, so that if an employee leaves his or her desk without logging off, someone else cannot gain unauthorized access to the computer.

In these examples, we
have used only three values (high, medium, and low) to assess likelihood, priority, and impact. Some organizations use more complex scoring systems, and nothing says that likelihood, priority, and impact have to use the same scales: some organizations use 5-point scales for priority, 7-point scales for impact, and 100-point scales for likelihood.

FIGURE 11-7 Threat scenario for destruction of customer information by a tornado

Asset: Client database (importance: High)
Threat: Natural disaster (tornado)
Description: Our data center could be hit by an F4 or F5 tornado that would destroy the database.
Likelihood: Low (1)
Impact on: Availability

Impact Area | Priority | Impact | Score
Financial | High (3) | Low (1) | 3
Productivity | Medium (2) | High (3) | 6
Reputation | High (3) | Low (1) | 3
Legal | Medium (2) | Low (1) | 2
Impact Score: 14
Risk Score (Likelihood × Impact Score): 14

Adequacy of Existing Controls: Medium
Risk Control Strategy: Mitigate

Risk Mitigation Controls:
• Backup of database: Each night, the database is copied to a second secure data center located 500 miles from the main data center.
• Disaster recovery plan: A disaster recovery plan is in place and is tested every two years to ensure that the database can be restored to an alternate data center that can be operational within 48 hours.

11.2.4 Document Existing Controls

Once the specific assets, threat scenarios, and their risk scores have been identified, you can begin to work on the risk control strategy, which is the way an organization intends to address a risk. In general, an organization can accept the risk, mitigate it, share it, or defer it. If an organization decides to accept a risk, it will take no action to address it and will accept the stated consequences. In general, these risks have very low impact on the organization.
Risk mitigation involves implementing some type of control to counter the threat or to minimize its impact. An organization can implement several types of controls, such as using antivirus software, implementing state-of-the-art firewalls, or providing security training for employees. An organization can also decide to share the risk, in which case it purchases insurance against the risk. For example, you share the risk of getting into a car accident: it is quite unlikely that you will be in one, but if it were to happen, you want the insurance company to step in and pay for the damages. Similarly, an organization may decide to purchase insurance against information theft or damage from a tornado. Sharing and mitigation can be done simultaneously. Finally, the organization can defer the risk. This usually happens when there is a need to collect additional information about the threat and the risk; these risks are usually not imminent and, if they were to occur, would not significantly impact the organization. For each threat scenario, the risk control strategy needs to be specified. If the organization decides to mitigate and/or share the risk, specific controls need to be listed. The next two sections of this chapter describe numerous controls that can be used to mitigate the security risks organizations face. Once the existing controls have been documented, an overall assessment of their adequacy is made. This assessment produces a value that is relative to the risk, such as high adequacy (the controls are expected to strongly control the risks in the threat scenario), medium adequacy (some improvements are possible), or low adequacy (improvements are needed to effectively mitigate or share the risk).
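The four strategies can be pictured as a simple decision rule. The sketch below is purely illustrative: the thresholds (accepting risk scores below 5, deferring only low, non-imminent risks) are invented for this example and are not part of any framework, and real strategy choices are management judgments, not code.

```python
def control_strategy(risk_score, insurable=False, needs_more_info=False):
    """Hypothetical mapping from a threat scenario to a risk control strategy."""
    if needs_more_info and risk_score < 10:
        return "defer"            # not imminent; collect more information first
    if risk_score < 5:
        return "accept"           # very low impact: take no action
    strategy = ["mitigate"]       # implement controls to counter the threat
    if insurable:
        strategy.append("share")  # buy insurance; can accompany mitigation
    return " + ".join(strategy)

print(control_strategy(50, insurable=True))       # mitigate + share
print(control_strategy(14))                       # mitigate
print(control_strategy(8, needs_more_info=True))  # defer
```

Note how the rule allows mitigation and sharing together, mirroring the text's point that an organization can both install controls and buy insurance against the same threat.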
Once again, some organizations use more complex scales, such as letter grades (A+, A, A−, B, etc.) or 100-point scales. The bottom sections of the threat scenarios in Figures 11-6 and 11-7 show the strategy, controls, and their adequacy for both threat scenarios. For the theft of information, the Web-based bookstore has already implemented several risk mitigation controls: encryption, a firewall, personnel policies, training, and automatic screen lock. For the tornado, the company implemented a database backup and a disaster recovery plan. Both have been assessed as medium adequacy. At this point, you may or may not understand the controls described in these figures; after you read the rest of the chapter, you will understand what each control is and how it works to mitigate the risk from the threat.

11.2.5 Identify Improvements

The final step in risk assessment, and its ultimate goal, is to identify what improvements are needed. Most organizations face so many threats that they cannot afford to mitigate all of them to the highest level. They need to focus first on the highest risks: the threat scenarios with the highest risk scores are carefully examined to ensure that there is at least a medium level of control adequacy and that the security requirements of the most important assets (those labeled high in Figure 11-4) are adequately protected. Additional controls that could be implemented to improve risk mitigation are considered, as are ways to share the risk. As mentioned earlier, Sections 11.3 and 11.4 describe many different controls that can be implemented to mitigate the risks associated with the loss of business continuity and unauthorized access. The second focus is on threat scenarios whose mitigation controls have low adequacy.
Ideally, these will all be low-risk threats, but they are examined to ensure that the level of expenditure matches the level of risk.

11.3 ENSURING BUSINESS CONTINUITY

Business continuity means that the organization's data and applications will continue to operate even in the face of disruption, destruction, or disaster. A business continuity plan has two major parts: the development of controls that will prevent these events from having a major impact on the organization, and a disaster recovery plan that will enable the organization to recover if a disaster occurs. In this section, we discuss controls designed to prevent, detect, and correct these threats. We focus on the major threats to business continuity: viruses, theft, denial-of-service attacks, device failure, and disasters. Business continuity planning is sometimes overlooked because intrusion is more often the subject of news reports.

11.3.1 Virus Protection

Special attention must be paid to preventing computer viruses. Some are harmless and just cause nuisance messages, but others are serious, destroying data, for example. In most cases, disruptions or the destruction of data are local and affect only a small number of computers. Such disruptions are usually fairly easy to deal with; the virus is removed and the network continues to operate. Some viruses cause widespread infection, although this has not occurred in recent years. Most viruses attach themselves to other programs or to special parts of disks. As those files execute or are accessed, the virus spreads. Macro viruses, viruses that are contained in documents, emails, or spreadsheet files, can spread when an infected file is simply opened. Some viruses change their appearance as they spread, making detection more difficult. A worm is a special type of virus that spreads itself without human intervention.
Many viruses attach themselves to a file and require a person to copy the file, but a worm copies itself from computer to computer. Worms spread when they install themselves on a computer and then send copies of themselves to other computers, sometimes by emails, sometimes via security holes in software. (Security holes are described later in this chapter.) The best way to prevent the spread of viruses is to install antivirus software such as that by Symantec. Most organizations automatically install antivirus software on theirMANAGEMENT11-2 Attack of the AuditorsFOCUSSecurity has become a major issue over the past few years. With the passage of HIPAA and the Sarbanes-Oxley Act, more and more regulations are addressing security. It takes years for most organizations to become compliant, because the rules are vague and there are many ways to meet the requirements. “If you’ve implemented commonsense security, you’re probably already in compliance from an IT standpoint,” says Kim Keanini, chief technology officer of nCricle, a security software firm. “Compliance from an auditing standpoint, however, is something else.” Auditors require documentation. It is no longer sufficient to put key network controls in place; now you have to provide documented proof that a control is working, which usually requires event logs of transactions and thwarted attacks.When it comes to security, Bill Randal, MIS director of Red Robin Restaurants, can’t stress enough the importance of documentation. “It’s what the auditors are really looking for,” he says. “They’re not IT folks, so they’re looking for documented processes they can track. 
At the start of our [security] compliance project, we literally stopped all other projects for another three weeks while we documented every security and auditing process we had in place." Software vendors are scrambling not only to ensure that their security software performs the functions it is designed to do but also to improve its ability to provide documentation for auditors.

Adapted from: Oliver Rist, "Attack of the Auditors," InfoWorld, March 21, 2005, pp. 34–40.

Chapter 11 Network Security

computers, but many people fail to install them on their home computers. Antivirus software is only as good as its last update, so it is critical that the software be updated regularly. Be sure to set your software to update automatically, or do it manually on a regular basis. Viruses are often spread by downloading files from the Internet, so do not copy or download files of unknown origin (e.g., music, videos, screen savers), or at least check every file you do download. Always check all files for viruses before using them (even those from friends!). Researchers estimate that 10 new viruses are developed every day, so it is important to frequently update the virus information files provided by the antivirus software.

11.3.2 Denial-of-Service Protection

With a denial-of-service (DoS) attack, an attacker attempts to disrupt the network by flooding it with messages so that the network cannot process messages from normal users. The simplest approach is to flood a Web server, mail server, and so on, with incoming messages. The server attempts to respond to these, but there are so many messages that it cannot. One might expect that it would be possible to filter messages from one source IP address so that if one user floods the network, the messages from that person can be filtered out before they reach the Web server being targeted.
This could work, but most attackers use tools that enable them to put false source IP addresses on the incoming messages, so it is difficult to distinguish a real message from a DoS message.

A distributed denial-of-service (DDoS) attack is even more disruptive. With a DDoS attack, the attacker breaks into and takes control of many computers on the Internet (often several hundred to several thousand) and plants software on them called a DDoS agent (or sometimes a zombie or a bot). The attacker then uses software called a DDoS handler (sometimes called a botnet) to control the agents. The handler issues instructions to the computers under the attacker's control, which simultaneously begin sending messages to the target site. In this way, the target is deluged with messages from many different sources, making it harder to identify the DoS messages and greatly increasing the number of messages hitting the target (see Figure 11-8). Some DDoS attacks have sent more than one million packets per second at the target.

There are several approaches to preventing DoS and DDoS attacks from affecting the network. The first is to configure the main router that connects your network to the Internet (or the firewall, which will be discussed later in this chapter) to verify that the source address of all incoming messages is in a valid address range for that connection (called traffic filtering). For example, if an incoming message has a source address from inside your network, then it is obviously a false address. This ensures that only messages with valid addresses are permitted into the network, although it requires more processing in the router and thus slows incoming traffic. A second approach is to configure the main router (or firewall) to limit the number of incoming packets that could be DoS/DDoS attack packets that it allows to enter the network, regardless of their source (called traffic limiting).
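The traffic-filtering check described above can be sketched in a few lines. This is a minimal illustration of the idea, not real router firmware; the 10.0.0.0/8 internal range and the function name are assumptions chosen for the sketch.

```python
import ipaddress

# Illustrative internal address block (an assumption for this sketch):
# packets arriving FROM the Internet should never claim a source
# address inside the organization's own network.
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def permit_inbound(source_ip: str) -> bool:
    """Ingress traffic filtering: reject any Internet-facing packet whose
    source address falls inside the internal range -- it must be forged."""
    return ipaddress.ip_address(source_ip) not in INTERNAL_NET
```

A packet claiming an internal source address (e.g., 10.1.2.3) is dropped at the perimeter, while a packet from an ordinary external address passes; real routers apply the same membership test in hardware against the valid range for each interface.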
Technical Focus 11-2 describes some of the types of DoS/DDoS attacks and the packets used. Such packets have the same content as legitimate packets that should be permitted into the network. It is a flood of such packets that indicates a DoS/DDoS attack, so by discarding packets over a certain number that arrive each second, one can reduce the impact of the attack. The disadvantage is that during an attack, some valid packets from regular customers will be discarded, so they will be unable to reach your network. Thus the network will continue to operate, but some customer packets (e.g., Web requests, emails) will be lost.

FIGURE 11-8  A distributed denial-of-service attack (a handler directing many agents)

A third and more sophisticated approach is to use a special-purpose security device, called a traffic anomaly detector, that is installed in front of the main router (or firewall) to perform traffic analysis. This device monitors normal traffic patterns and learns what normal traffic looks like. Most DoS/DDoS attacks target a specific server or device, so when the anomaly detector recognizes a sudden burst of abnormally high traffic destined for

MANAGEMENT FOCUS 11-3: DDoS Attacks for Hire?

Although the idea of DDoS attacks is not new, they have increased by 1,000% since 2005, partially because you can now hire a hacker who will attack anyone you like for a fee. On hacker forums, hackers advertise their ability to take Web sites down. All you need to do is reach them via a message on the forum and negotiate the fee. DDoS attacks are also used as a test for hackers wanting to join these hacker groups. The leader of a hacker group will give a target Web site to an aspiring member, and the hacker has to prove that he or she can bring the Web site down.
The target Web sites are selected based on the security measures they have in place to protect themselves against attacks, so this task can be simple or quite complex depending on the test target selected. DDoS attacks are here to stay because they are no longer a hobby but a source of income for cybercriminals. Attackers are now able to bombard a target at 300+ Gbps, which is six times the size of the largest attack in 2009.

Adapted from: "The New Normal: 200–400 Gbps DDoS Attacks," posted February 14, 2014, on Krebs on Security (krebsonsecurity.com).

FIGURE 11-9  Traffic analysis reduces the impact of denial-of-service attacks (ISP router, inbound traffic detector, analyzer, quarantined and released traffic)

a specific server or device, it quarantines those incoming packets but allows normal traffic to flow through into the network. This results in minimal impact to the network as a whole. The anomaly detector reroutes the quarantined packets to a traffic anomaly analyzer (see Figure 11-9). The anomaly analyzer examines the quarantined traffic, attempts to recognize valid source addresses and "normal" traffic, and selects which of the quarantined packets to release into the network. The detector can also inform the router owned by the ISP that is sending the traffic into the organization's network to reroute the suspect traffic to the anomaly analyzer, thus avoiding the main circuit leading into the organization. This process is never perfect, but it is significantly better than the other approaches.

TECHNICAL FOCUS 11-2: Inside a DoS Attack

A DoS attack typically involves the misuse of standard TCP/IP protocols or connection processes so that the target for the DoS attack responds in a way designed to create maximum trouble.
Six common types of attacks include the following:

• ICMP Attacks. The network is flooded with ICMP echo requests (i.e., pings) that have a broadcast destination address and a faked source address of the intended target. Because it is a broadcast message, every computer on the network responds to the faked source address, so the target is overwhelmed by responses. Because there are often dozens of computers in the same broadcast domain, each message generates dozens of messages at the target.

• UDP Attacks. This attack is similar to an ICMP attack, except that it uses UDP echo requests instead of ICMP echo requests.

• TCP SYN Floods. The target is swamped with repeated SYN requests to establish a TCP connection, but when the target responds (usually to a faked source address), there is no response. The target continues to allocate TCP control blocks, expects each of the requests to be completed, and gradually runs out of memory.

• UNIX Process Table Attacks. This is similar to a TCP SYN flood, but instead of TCP SYN packets, the target is swamped by UNIX open connection requests that are never completed. The target allocates open connections and gradually runs out of memory.

• Finger of Death Attacks. This is similar to the TCP SYN flood, but instead the target is swamped by finger requests that are never disconnected.

• DNS Recursion Attacks. The attacker sends DNS requests to DNS servers (often within the target's network) but spoofs the source address so the requests appear to come from the target computer, which is then overwhelmed by DNS responses. DNS responses are larger packets than ICMP, UDP, or SYN responses, so the effects can be stronger.

Adapted from: "Web Site Security and Denial of Service Protection," www.nwfusion.com.
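The TCP SYN flood in the list above works by exhausting the listener's per-connection state. A toy simulation of that exhaustion follows; the backlog size of 5, the class name, and the addresses (TEST-NET ranges) are illustrative assumptions, not real TCP internals.

```python
class Listener:
    """Toy model of a TCP listener that allocates a control block per
    half-open connection, as described in the SYN-flood entry above."""

    def __init__(self, backlog=5):  # tiny backlog, chosen for the sketch
        self.backlog = backlog
        self.half_open = []          # allocated "TCP control blocks"

    def on_syn(self, src_ip):
        """Allocate state for a connection attempt; refuse when full."""
        if len(self.half_open) >= self.backlog:
            return "dropped"         # legitimate clients are now refused too
        self.half_open.append(src_ip)
        return "syn-ack sent"

listener = Listener()
# Attacker sends SYNs from forged addresses that never complete the handshake,
# so the allocated control blocks are never freed.
for i in range(5):
    listener.on_syn(f"203.0.113.{i}")

# A real customer now tries to connect and is turned away.
result_for_real_client = listener.on_syn("198.51.100.7")
```

Once the forged half-open connections fill the backlog, the legitimate client's SYN is dropped, which is exactly the denial of service; real stacks mitigate this with timeouts and techniques such as SYN cookies.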
Another possibility under discussion by the Internet community as a whole is to require Internet service providers (ISPs) to verify that all incoming messages they receive from their customers have valid source IP addresses. This would prevent the use of faked IP addresses and enable users to easily filter out DoS messages from a given address. It would make it virtually impossible for a DoS attack to succeed and much harder for a DDoS attack to succeed. Because small- to medium-sized businesses often have poor security and become unwilling accomplices in DDoS attacks, many ISPs are beginning to impose security restrictions on them, such as requiring firewalls to prevent unauthorized access (firewalls are discussed later in this chapter).

11.3.3 Theft Protection

One often overlooked security risk is theft. Computers and network equipment are commonplace items that have a good resale value. Several industry sources estimate that more than $1 billion is lost to computer theft each year, with many of the stolen items ending up on Internet auction sites (e.g., eBay).

Physical security is a key component of theft protection. Most organizations require anyone entering their offices to go through some level of physical security. For example, most offices have security guards and require all visitors to be authorized by an organization employee. Universities are among the few organizations that permit anyone to enter their facilities without verification. Therefore, you'll see most computer equipment and network devices protected by locked doors or security cables so that someone cannot easily steal them.

One of the most common targets for theft is laptop computers. More laptop computers are stolen from employees' homes, cars, and hotel rooms than any other device. Airports are another common place for laptop thefts.
It is hard to provide physical security for traveling employees, but most organizations provide regular reminders to their employees to take special care when traveling with laptops. Nonetheless, laptops are still the most commonly stolen devices.

11.3.4 Device Failure Protection

Eventually, every computer, network device, cable, or leased circuit will fail. It's just a matter of time. Some computers, devices, cables, and circuits are more reliable than others, but every network manager has to be prepared for a failure.

The best way to prevent a failure from impacting business continuity is to build redundancy into the network. For any network component that would have a major impact on business continuity, the network designer provides a second, redundant component. For example, if the Internet connection is important to the organization, the network designer ensures that there are at least two connections into the Internet, each provided by a different common carrier, so that if one common carrier's network goes down, the organization can still reach the Internet via the other carrier's network. This means, of course, that the organization now requires two routers to connect to the Internet, because there is little use in having two Internet connections if they both run through the same router; if that one router goes down, having a second Internet connection provides no value. This same design principle applies to the organization's internal networks. If the core backbone is important (and it usually is), then the organization must have two core backbones, each served by different devices. Each distribution backbone that connects to the core backbone (e.g., a building backbone that connects to a campus backbone) must also have two connections (and two routers) into the core backbone.
The next logical step is to ensure that each access layer LAN also has two connections into the distribution backbone. Redundancy can be expensive, so at some point, most organizations decide that not all parts of the network need to be protected. Most organizations build redundancy into their core backbone and their Internet connections but are very careful in choosing which distribution backbones (i.e., building backbones) and access layer LANs will have redundancy. Only those building backbones and access LANs that are truly important will have redundancy. This is why a risk assessment is important: it is too expensive to protect the entire network. Most organizations provide redundancy only in mission-critical backbones and LANs (e.g., those that lead to servers).

Redundancy also applies to servers. Most organizations use a server farm, rather than a single server, so that if one server fails, the other servers in the server farm continue to operate and there is little impact. Some organizations use fault-tolerant servers that contain many redundant components, so that if one component fails, the server continues to operate. Redundant array of independent disks (RAID) is a storage technology that, as the name suggests, is made of many separate disk drives. When a file is written to a RAID device, it is written across several separate, redundant disks. There are several types of RAID. RAID 0 uses multiple disk drives and therefore is faster than traditional storage, because the data can be written or read in parallel across several disks, rather than sequentially on the same disk. RAID 1 writes duplicate copies of all data on at least two different disks; this means that if one disk in the RAID array fails, there is no data loss because there is a second copy of the data stored on a different disk. This is sometimes called disk mirroring, because the data on one disk is copied (or mirrored) onto another.
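The disk-mirroring idea behind RAID 1 can be sketched with two in-memory "disks." This is an illustration of the concept only, not a storage driver; the class name and block layout are assumptions made for the sketch.

```python
class Raid1:
    """Toy RAID 1 mirror: every write goes to two disks, so either copy
    alone can serve reads after the other drive fails."""

    def __init__(self):
        self.disks = [{}, {}]       # two independent disk images (block -> data)
        self.failed = set()         # indices of failed drives

    def write(self, block, data):
        for i, disk in enumerate(self.disks):
            if i not in self.failed:
                disk[block] = data  # duplicate copy on each surviving disk

    def read(self, block):
        for i, disk in enumerate(self.disks):
            if i not in self.failed and block in disk:
                return disk[block]  # any surviving mirror satisfies the read
        raise IOError("both mirrors lost")

array = Raid1()
array.write(0, b"payroll records")
array.failed.add(0)                 # one drive dies...
recovered = array.read(0)           # ...and the data survives on the mirror
```

The same write issued to both disks is what makes the failure of one drive invisible to readers, at the cost of halving usable capacity; RAID levels 2 through 6, described next in the text, trade that cost against parity-based error checking.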
RAID 2 provides error checking to ensure that no errors have occurred during the reading or writing process. RAID 3 provides better and faster error checking than RAID 2. RAID 4 provides slightly faster read access than RAID 3 because of the way it allocates the data to different disk drives. RAID 5 provides slightly faster read and write access because of the way it allocates the error-checking data to different disk drives. RAID 6 can survive the failure of two drives with no data loss.

Power outages are one of the most common causes of network failures. An uninterruptible power supply (UPS) is a device that detects power failures and permits the devices attached to it to operate as long as its battery lasts. UPS units for home use are inexpensive and often provide power for up to 15 minutes, long enough for you to save your work and shut down your computer. UPS units for large organizations often have batteries that last for an hour and permit mission-critical servers, switches, and routers to operate until the organization's backup generator can be activated.

11.3.5 Disaster Protection

A disaster is an event that destroys a large part of the network and computing infrastructure in one part of the organization. Disasters are usually caused by natural forces (e.g., hurricanes, floods, earthquakes, fires), but some are human-made (e.g., arson, bombs, terrorism).

Avoiding Disaster. Ideally, you want to avoid a disaster, which can be difficult. For example, how do you avoid an earthquake? There are, however, some commonsense steps you can take to prevent the full impact of a disaster from affecting your network. The most fundamental is again redundancy: store critical data in at least two very different places, so that if a disaster hits one place, your data are still safe. Other steps depend on the disaster to be avoided.
For example, to avoid the impact of a flood, key network components and data should never be located near rivers or in the

MANAGEMENT FOCUS 11-4: Recovering from Katrina

Although natural disasters don't happen frequently, people remember them long after their time. Hurricane Katrina is the most recent natural disaster to be ranked among the 10 worst of the past century. This Category 5 hurricane caused terrifying damage but also taught us how to better prepare for future natural disasters.

As Hurricane Katrina swept over New Orleans, Ochsner Hospital lost two of its three backup power generators, knocking out air-conditioning in the 95-degree heat. Fans were brought out to cool patients, but temperatures inside critical computer and networking equipment reached 150 degrees. Kurt Induni, the hospital's network manager, shut down part of the network and the mainframe with its critical patient records system to ensure they survived the storm. The hospital returned to paper-based record keeping, but Induni managed to keep email alive, which became critical when the telephone system failed and a main fiber line was cut. Email through the hospital's T-3 line into Baton Rouge became the only reliable means of communication. After the storm, the mainframe was turned back on and the patient records were updated.

While Ochsner Hospital remained open, Kindred Hospital was forced to evacuate patients (under military protection from looters and snipers). The patients' files, all electronic, were simply transferred over the network to other hospitals with no worry about lost records, X-rays, CT scans, and such. In contrast, the Louisiana court system learned a hard lesson.
The court system is administered by each individual parish (i.e., county), and not every parish had a disaster recovery plan or even backups of key documents; many parishes still used old paper files, which were destroyed by the storm. "We've got people in jails all over the state right now that have no paperwork and we have no way to offer them any kind of means for adjudication," said Freddie Manit, CIO for the Louisiana Ninth Judicial District Court. No paperwork means no prosecution, even for felons with long records, so many prisoners would simply be released. Sometimes losing data is not the worst thing that can happen.

Adapted from: http://www.popularmechanics.com/science/environment/natural-disasters/4219861; Phil Hochmuth, "Weathering Katrina," NetworkWorld, September 19, 2005, pp. 1, 20; and M. K. McGee, "Storm Shows Benefits, Failures of Technology," InformationWeek, September 15, 2005, p. 34.

basement of a building. To avoid the impact of a tornado, key network components and data should be located underground. To reduce the impact of fire, a fire suppression system should be installed in all key data centers. To reduce the impact of terrorist activities, the location of key network components and data should be kept secret and should be protected by security guards.

Disaster Recovery. A critical element in correcting problems after a disaster is the disaster recovery plan, which should address various levels of response to a number of possible disasters and should provide for partial or complete recovery of all data, application software, network components, and physical facilities. A complete disaster recovery plan covering all these areas is beyond the scope of this text. Figure 11-10 provides a summary of many key issues. A good example of a disaster recovery plan is MIT's business continuity plan at web.mit.edu/security/www/pubplan.htm. Some firms prefer the term business continuity plan.
The most important elements of the disaster recovery plan are backup and recovery controls that enable the organization to recover its data and restart its application software should some portion of the network fail. The simplest approach is to make backup copies of all organizational data and software routinely and to store these backup copies off-site.

FIGURE 11-10  Elements of a disaster recovery plan

A good disaster recovery plan should include the following:

• The name of the decision-making manager who is in charge of the disaster recovery operation. A second manager should be indicated in case the first manager is unavailable.

• Staff assignments and responsibilities during the disaster.

• A preestablished list of priorities that states what is to be fixed first.

• Location of alternative facilities operated by the company or a professional disaster recovery firm, and procedures for switching operations to those facilities using backups of data and software.

• Recovery procedures for the data communication facilities (backbone network, metropolitan area network, wide area network, and local area network), servers, and application systems. This includes information on the location of circuits and devices, whom to contact for information, and the support that can be expected from vendors, along with the name and telephone number of the person to contact at each vendor.

• Action to be taken in case of partial damage or threats such as bomb threats, fire, water or electrical damage, sabotage, civil disorders, and vendor failures.

• Manual processes to be used until the network is functional.

• Procedures to ensure adequate updating and testing of the disaster recovery plan.

• Storage of the data, software, and the disaster recovery plan itself in a safe area where they cannot be destroyed by a catastrophe.
This area must be accessible, however, to those who need to use the plan.

Most organizations make daily backups of all critical information, with less important information (e.g., email files) backed up weekly. Backups used to be done on tapes that were physically shipped to an off-site location, but more and more, companies are using their WAN connections to transfer data to remote locations (it's faster and cheaper than moving tapes). Backups should always be encrypted (encryption is discussed later in the chapter) to ensure that no unauthorized users can access them.

Continuous data protection (CDP) is another option that firms are using in addition to or instead of regular backups. With CDP, copies of all data and transactions on selected servers are written to CDP servers as each transaction occurs. CDP is more flexible than traditional backups, which take snapshots of data at specific times, or than disk mirroring, which duplicates the contents of a disk from second to second. CDP enables data to be stored miles from the originating server and time-stamps all transactions, enabling organizations to restore data to any specific point in time. For example, suppose a virus brings down a server at 2:45 p.m. The network manager can restore the server to the state it was in at 2:30 p.m. and simply resume operations as though the virus had not hit.

Backups and CDP ensure that important data are safe, but they do not guarantee the data can be used. The disaster recovery plan should include a documented and tested approach to recovery. The recovery plan should have specific goals for different types of disasters. For example, if the main database server was destroyed, how long should it take the organization to have the software and data back in operation by using the backups? Conversely, if the main data center was completely destroyed, how long should it take? The answers to these questions have very different implications for costs.
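The point-in-time restore that CDP enables can be sketched as the replay of a time-stamped transaction log, echoing the 2:45 p.m. / 2:30 p.m. example above. The timestamps, record names, and function are illustrative assumptions, not any vendor's CDP product.

```python
from datetime import datetime

# Each transaction is written to the CDP server with a timestamp as it occurs.
# The 2:40 p.m. entry stands in for the virus-corrupted write in the example.
log = [
    (datetime(2014, 7, 25, 14, 10), "acct-1", 100),
    (datetime(2014, 7, 25, 14, 25), "acct-2", 250),
    (datetime(2014, 7, 25, 14, 40), "acct-1", -999),  # corrupted write
]

def restore(log, as_of):
    """Rebuild server state by replaying the log up to a chosen instant."""
    state = {}
    for ts, key, value in log:
        if ts <= as_of:
            state[key] = value
    return state

# Restore to 2:30 p.m., before the corruption hit.
state = restore(log, datetime(2014, 7, 25, 14, 30))
```

Because every transaction carries its own timestamp, the cutoff can be any instant at all, which is what distinguishes CDP from snapshot backups taken only at fixed times.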
Having a spare network server or a server with extra capacity that can be used in the event of the loss of the primary server is one thing. Having a spare data center ready to operate within 12 hours (for example) is an entirely different proposition.

Many organizations have a disaster recovery plan, but only a few test their plans. A disaster recovery drill is much like a fire drill in that it tests the disaster recovery plan and provides staff the opportunity to practice little-used skills to see what works and what doesn't work before a disaster happens and the staff must use the plan for real. Without regular disaster recovery drills, the only time a plan is tested is when it must be used. For example, when an island-wide blackout shut down all power in Bermuda, the backup generator in the British Caymanian Insurance office automatically took over and kept the company operating. However, the key-card security system, which was not on the generator, shut down, locking out all employees and forcing them to spend the day at the beach. No one had thought about the security system, and the plan had not been tested.

Organizations are usually much better at backing up important data than individual users are. When did you last back up the data on your computer? What would you do if your computer were stolen or destroyed? There is an inexpensive alternative to CDP for home users. Online backup services such as mozy.com enable you to back up the data on your computer to their server on the Internet. You download and install client software that enables you to select which folders to back up. After you back up the data for the first time, which takes a while, the software will run every few hours and automatically back up all changes to the server, so you never have to think about backups again.
If you need to recover some or all of your data, you can go to their Web site and download it.

MANAGEMENT FOCUS 11-5: Disaster Recovery Hits Home

"The building is on fire" were the first words she said as I answered the phone. It was just before noon, and one of my students had called me from her office on the top floor of the business school at the University of Georgia. The roofing contractor had just started what would turn out to be the worst fire in the region in more than 20 years, although we didn't know it then. I had enough time to gather up the really important things from my office on the ground floor (memorabilia, awards, and pictures from 10 years in academia) when the fire alarm went off. I didn't bother with the computer; all the files were backed up off-site.

Ten hours, 100 firefighters, and 1.5 million gallons of water later, the fire was out. Then our work began. The fire had completely destroyed the top floor of the building, including my 20-computer networking lab. Water had severely damaged the rest of the building, including my office, which, I learned later, had been flooded by almost 2 feet of water at the height of the fire. My computer, and virtually all the computers in the building, were damaged by the water and unusable.

My personal files were unaffected by the loss of the computer in my office; I simply used the backups and continued working, after making new backups and giving them to a friend to store at his house. The Web server I managed had been backed up to another server on the opposite side of campus 2 days before (on its usual weekly backup cycle), so we had lost only 2 days' worth of changes. In less than 24 hours, our Web site was operational; I had our server's files mounted on the university library's Web server and redirected the university's DNS server to route traffic from our old server address to our new temporary home. Unfortunately, the rest of our network did not fare as well.
Our primary Web server had been backed up to tape the night before, and though the tapes were stored off-site, the tape drive was not; the tape drive was destroyed, and no one else on campus had one that could read our tapes. It took 5 days to get a replacement and reestablish the Web site. Within 30 days, we were operating from temporary offices with a new network, and 90% of the office computers and their data had been successfully recovered.

Living through a fire changes a person. I'm more careful now about backing up my files, and I move ever so much more quickly when a fire alarm sounds. But I still can't get used to the rust that is slowly growing on my "recovered" computer.

Source: Alan Dennis

Disaster Recovery Outsourcing. Most large organizations have a two-level disaster recovery plan. When they build networks, they build enough capacity and have enough spare equipment to recover from a minor disaster, such as the loss of a major server or a portion of the network (if any such disaster can truly be called minor). This is the first level. Building a network that has sufficient capacity to quickly recover from a major disaster such as the loss of an entire data center is beyond the resources of most firms. Therefore, most large organizations rely on professional disaster recovery firms to provide this second-level support for major disasters.

Many large firms outsource their disaster recovery efforts by hiring disaster recovery firms that provide a wide range of services. At the simplest, disaster recovery firms provide secure storage for backups. Full services include a complete networked data center that clients can use when they experience a disaster.
Once a company declares a disaster, the disaster recovery firm immediately begins recovery operations using the backups stored on-site and can have the organization's entire data network back in operation on the disaster recovery firm's computer systems within hours. Full services are not cheap, but compared with the potential millions of dollars that can be lost per day from the inability to access critical data and application systems, these services quickly pay for themselves in time of disaster.

11.4 INTRUSION PREVENTION

Intrusion is the second main type of security problem and the one that tends to receive the most attention. No one wants an intruder breaking into his or her network.

Four types of intruders may attempt to gain unauthorized access to computer networks. The first are casual intruders who have only a limited knowledge of computer security. They simply cruise along the Internet, trying to access any computer they come across. Their unsophisticated techniques are the equivalent of trying doorknobs, and, until recently, only those networks that left their front doors unlocked were at risk. Unfortunately, a variety of hacking tools are now available on the Internet that enable even novices to launch sophisticated intrusion attempts. Novice attackers who use such tools are sometimes called script kiddies.

The second type of intruders are experts in security, but their motivation is the thrill of the hunt. They break into computer networks because they enjoy the challenge and enjoy showing off for friends or embarrassing the network owners. These intruders are called hackers and often have a strong philosophy against ownership of data and software. Most cause little damage and make little attempt to profit from their exploits, but those who do can cause major problems. Hackers who cause damage are often called crackers.

The third type of intruder is the most dangerous.
These are professional hackers who break into corporate or government computers for specific purposes, such as espionage, fraud, or intentional destruction. The U.S. Department of Defense (DoD), which routinely monitors attacks against U.S. military targets, until recently concluded that most attacks were by individuals or small groups of hackers in the first two categories. Although some of their attacks were embarrassing (e.g., defacement of some military and intelligence Web sites), they posed no serious security risks. In the late 1990s, however, the DoD noticed a small but growing number of intentional attacks that it classifies as exercises: exploratory attacks designed to test the effectiveness of certain software attack weapons. It therefore established an information warfare program and a new organization, under the U.S. Space Command, responsible for coordinating the defense of military networks.

The fourth type of intruder is also very dangerous: organization employees who have legitimate access to the network but who gain access to information they are not authorized to use. This information could be used for their own personal gain, sold to competitors, or fraudulently changed to give the employee extra income. Many security break-ins are caused by this type of intruder.

The key principle in preventing intrusion is to be proactive: routinely test your security systems before an intruder does. Many steps can be taken to prevent intrusion and unauthorized access to organizational data and networks, but no network is completely safe. The best rule for high security is to do what the military does: do not keep extremely sensitive data online. Data that need special security are stored on computers isolated from other networks.
In the following sections, we discuss the most important security controls for preventing intrusion and for recovering from intrusion when it occurs.

11.4.1 Security Policy

In the same way that a disaster recovery plan is critical to controlling risks due to disruption, destruction, and disaster, a security policy is critical to controlling risk due to intrusion. The security policy should clearly define the important assets to be safeguarded and the important controls needed to do that. It should have a section devoted to what employees should and should not do. It should also contain a clear plan for routinely training employees, particularly end users with little computer expertise, on key security rules, and a clear plan for routinely testing and improving the security controls in place (Figure 11-11). A good set of examples and templates is available at www.sans.org/resources/policies.

11.4.2 Perimeter Security and Firewalls

Ideally, you want to stop external intruders at the perimeter of your network so that they cannot reach the servers inside.
There are three basic access points into most networks: the Internet, LANs, and WLANs. Recent surveys suggest that the most common access point for intrusion is the Internet connection (70% of organizations experienced an attack from the Internet), followed by LANs and WLANs (30%). External intruders are most likely to use the Internet connection, whereas internal intruders are most likely to use the LAN or WLAN.

FIGURE 11-11 Elements of a security policy. A good security policy should include the following:
• The name of the decision-making manager who is in charge of security
• An incident reporting system and a rapid-response team to respond to security breaches in progress
• A risk assessment with priorities as to which assets are most important
• Effective controls placed at all major access points into the network to prevent or deter access by external agents
• Effective controls placed within the network to ensure that internal users cannot exceed their authorized access
• Use of the minimum number of controls possible, to reduce management time and to provide the least inconvenience to users
• An acceptable use policy that explains to users what they can and cannot do, including guidelines for accessing others' accounts, password security, email rules, and so on
• A procedure for monitoring changes to important network components (e.g., routers, DNS servers)
• A plan to routinely train users regarding security policies and build awareness of security risks
• A plan to routinely test and update all security controls, including monitoring of popular press and vendor reports of security holes
• An annual audit and review of security practices

FIGURE 11-12 Using a firewall to protect networks (a firewall placed between the Internet and the organization's backbone network)
Because the Internet is the most common source of intrusions, the focus of perimeter security is usually on the Internet connection, although physical security is also important. A firewall is commonly used to secure an organization's Internet connection. A firewall is a router or special-purpose device that examines packets flowing into and out of a network and restricts access to the organization's network. The network is designed so that a firewall sits on every connection between the organization and the Internet (Figure 11-12); no access is permitted except through the firewall. Some firewalls can also detect and prevent denial-of-service attacks as well as unauthorized access attempts. Three commonly used types of firewalls are packet-level firewalls, application-level firewalls, and NAT firewalls.

Packet-Level Firewalls

A packet-level firewall examines the source and destination address of every network packet that passes through it. It allows into or out of the organization's network only those packets that have acceptable source and destination addresses. In general, the addresses are examined only at the transport layer (TCP port number) and network layer (IP address). Each packet is examined individually, so the firewall has no knowledge of the packets that came before; it simply permits or denies each packet based on its contents alone. This type of firewall is the simplest and least secure, because it does not monitor the contents of packets or why they are being transmitted, and it typically does not log packets for later analysis. The network manager writes a set of rules, called an access control list (ACL), that tells the packet-level firewall which packets to permit into the network and which to deny entry.
Remember that the IP packet contains the source and destination IP addresses and that the TCP segment contains the destination port number, which identifies the application-layer software to which the packet is going. Most application-layer software on servers uses standard TCP port numbers: the Web (HTTP) uses port 80, whereas email (SMTP) uses port 25.

Suppose that the organization has a public Web server with an IP address of 128.192.44.44 and an email server with an address of 128.192.44.45 (see Figure 11-13). The network manager wants to make sure that no one outside the organization can change the contents of the Web server (e.g., by using telnet or FTP). The ACL could be written to include a rule that permits the Web server to receive HTTP packets from the Internet (other types of packets would be discarded): if the source address is anything, the destination IP address is 128.192.44.44, and the destination TCP port is 80, then permit the packet into the network (see the ACL on the firewall in Figure 11-13). Likewise, we could add a rule to the ACL that permits SMTP packets to reach the email server: if the source address is anything, the destination is 128.192.44.45, and the destination TCP port is 25, then permit the packet through. The last line in the ACL is usually a rule that denies entry to all packets that have not been specifically permitted (some firewalls come configured to deny all packets other than those explicitly permitted, so this rule would not be needed).

FIGURE 11-13 How packet-level firewalls work. Permitted inbound traffic: 192.168.34.121 to 128.192.44.44 port 80 (HTTP) and 102.18.55.33 to 128.192.44.45 port 25 (SMTP). Discarded traffic: 192.168.44.122 to 128.192.44.44 port 23 (telnet). Access control list: permit TCP any 128.192.44.44 80; permit TCP any 128.192.44.45 25; deny IP any any.

With this ACL, if an external intruder attempted to use telnet (port 23) to reach the Web server, the firewall would deny the packet entry and simply discard it.

Although source IP addresses can be used in the ACL, they often are not. Most hackers have software that can change the source IP address on the packets they send (called IP spoofing), so using the source IP address in security rules is usually not worth the effort. Some network managers do, however, routinely include a rule in the ACL that denies entry to any packet arriving from the Internet with a source IP address belonging to a subnet inside the organization, because such a packet must have a spoofed address and therefore is obviously an intrusion attempt.

Application-Level Firewalls

An application-level firewall is more expensive and more complicated to install and manage than a packet-level firewall, because it examines the contents of the application-level packet and searches for known attacks (see Security Holes later in this chapter). Application-level firewalls have rules for each application they can process. For example, most can check Web packets (HTTP), email packets (SMTP), and other common protocols. In some cases, an organization must write special rules to permit the use of application software it has developed.
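To make the packet-level filtering described above concrete, the ACL logic of Figure 11-13 can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual rule syntax; the function and field names are assumptions made for the example.

```python
# Minimal sketch of a packet-level firewall applying an access control
# list (ACL). Rules are checked in order; the final deny-all rule mirrors
# the last line of the ACL in Figure 11-13. Names are illustrative only.

ANY = "any"

# (protocol, source IP, destination IP, destination port, action)
ACL = [
    ("tcp", ANY, "128.192.44.44", 80, "permit"),  # HTTP to the Web server
    ("tcp", ANY, "128.192.44.45", 25, "permit"),  # SMTP to the mail server
    ("ip",  ANY, ANY,             ANY, "deny"),   # deny everything else
]

def filter_packet(proto, src_ip, dst_ip, dst_port):
    """Return 'permit' or 'deny' for one packet, examined in isolation
    (a packet-level firewall keeps no memory of earlier packets)."""
    for rule_proto, rule_src, rule_dst, rule_port, action in ACL:
        if rule_proto not in ("ip", proto):
            continue
        if rule_src not in (ANY, src_ip):
            continue
        if rule_dst not in (ANY, dst_ip):
            continue
        if rule_port not in (ANY, dst_port):
            continue
        return action
    return "deny"  # implicit deny if no rule matches

# HTTP to the Web server is permitted; telnet (port 23) to it is discarded.
print(filter_packet("tcp", "192.168.34.121", "128.192.44.44", 80))  # permit
print(filter_packet("tcp", "192.168.44.122", "128.192.44.44", 23))  # deny
```

Note that each packet is judged entirely on its own header fields, which is exactly why this style of firewall is simple, fast, and comparatively weak.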
Remember from Chapter 5 that TCP uses connection-oriented messaging, in which a client first establishes a connection with a server before beginning to exchange data. Application-level firewalls can use stateful inspection, which means that they monitor and record the status of each connection and can use this information in deciding which packets to discard as security threats. Many application-level firewalls prohibit external users from uploading executable files; in this way, intruders (or authorized users) cannot modify any software unless they have physical access to the firewall. Some refuse changes to their software unless the changes are made by the vendor. Others actively monitor their own software and automatically disable outside connections if they detect any changes.

Network Address Translation Firewalls

Network address translation (NAT) is the process of converting between one set of public IP addresses that are viewable from the Internet and a second set of private IP addresses that are hidden from people outside the organization. NAT is transparent, in that no computer knows it is happening. Although NAT can be done for several reasons, the most common are IPv4 address conservation and security. If external intruders on the Internet cannot see the private IP addresses inside your organization, they cannot attack your computers. Most routers and firewalls today have NAT built into them, even inexpensive routers designed for home use.

The NAT firewall uses an address table to translate the private IP addresses used inside the organization into proxy IP addresses used on the Internet. When a computer inside the organization accesses a computer on the Internet, the firewall changes the source IP address in the outgoing IP packet to its own address.
It also sets the source port number in the TCP segment to a unique number that it uses as an index into its address table to find the IP address of the actual sending computer on the organization's internal network. When the external computer responds to the request, it addresses the message to the firewall's IP address. The firewall receives the incoming message and, after ensuring that the packet should be permitted inside, changes the destination IP address to the private IP address of the internal computer and changes the TCP port number to the correct port number before transmitting the packet on the internal network. In this way, systems outside the organization never see the actual internal IP addresses and thus think there is only one computer on the internal network.

Most organizations also increase security by using private internal addresses. For example, if the organization has been assigned the Internet address domain 128.192.55.X, the NAT firewall would be assigned an address such as 128.192.55.1. Internal computers, however, would not be assigned addresses in the 128.192.55.X subnet. Instead, they would be assigned private, unregistered addresses such as 10.3.3.55 (addresses in the 10.X.X.X domain are not assigned to organizations but are reserved for use by private intranets). Because these internal addresses are never used on the Internet and are always converted by the firewall, this poses no problems for the users. Even if attackers discovered an actual internal IP address, it would be impossible for them to reach it from the Internet, because the address could not be used to reach the organization's computers.

Firewall Architecture

Many organizations use layers of NAT, packet-level, and application-level firewalls (Figure 11-14). Packet-level firewalls are used as an initial screen from the Internet into a network devoted solely to servers intended to provide public access (e.g., Web servers, public DNS servers).
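The NAT address-table mechanics described above can be sketched as a small simulation. The table structure, the port-allocation scheme, and all addresses are simplifying assumptions for illustration, not a real firewall implementation.

```python
# Simplified sketch of NAT port mapping. Outbound packets get the
# firewall's public address and a unique proxy source port; inbound
# replies are translated back using the address table. The addresses and
# the sequential port allocation are illustrative assumptions only.

FIREWALL_PUBLIC_IP = "128.192.55.1"

class NatFirewall:
    def __init__(self):
        self.table = {}        # proxy port -> (private IP, private port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """Translate an outgoing packet; return the rewritten header."""
        proxy_port = self.next_port
        self.next_port += 1
        self.table[proxy_port] = (src_ip, src_port)
        return (FIREWALL_PUBLIC_IP, proxy_port, dst_ip, dst_port)

    def inbound(self, src_ip, src_port, dst_ip, dst_port):
        """Translate a reply addressed to the firewall back to the
        internal host, or drop it if no table entry exists."""
        if dst_port not in self.table:
            return None  # unsolicited packet: discard
        private_ip, private_port = self.table[dst_port]
        return (src_ip, src_port, private_ip, private_port)

nat = NatFirewall()
# Internal client 10.3.3.55:4550 sends an HTTP request to a Web server.
out = nat.outbound("10.3.3.55", 4550, "133.192.44.45", 80)
print(out)   # ('128.192.55.1', 40000, '133.192.44.45', 80)
# The server's reply comes back addressed to the firewall.
back = nat.inbound("133.192.44.45", 80, FIREWALL_PUBLIC_IP, 40000)
print(back)  # ('133.192.44.45', 80, '10.3.3.55', 4550)
```

The key point the sketch illustrates is that the outside world only ever sees the firewall's address and the proxy port; the private 10.X.X.X address never leaves the internal network.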
This network is sometimes called the DMZ (demilitarized zone) because it contains the organization's public servers but does not provide complete security for them. The packet-level firewall will permit Web requests and similar access to the DMZ servers but will deny FTP access to them from the Internet, because no one except internal users should have the right to modify the servers. Each major portion of the organization's internal network has its own NAT firewall to grant (or deny) access based on rules established by that part of the organization.

FIGURE 11-14 A typical network design using firewalls: a packet-level firewall screens traffic from the ISP into the DMZ (containing the public DNS, mail, and Web servers), and separate NAT firewalls protect each internal network.

Figure 11-14 also shows how a packet sent by a client computer inside one of the internal networks protected by a NAT firewall flows through the network. The packet created by the client carries the client's source address and the source port number of the process on the client that generated it (an HTTP packet going to a Web server, as you can tell from the destination port number of 80). When the packet reaches the firewall, the firewall changes the source address on the IP packet to its own address and changes the source port number to an index it will use to identify the client computer's address and port number. The destination address and port number are unchanged. The firewall then sends the packet on its way to the destination. When the destination Web server responds to this packet, it responds using the firewall's address and port number.
When the firewall receives the incoming packets, it uses the destination port number to identify the IP address and port number to use on the internal network, changes the inbound packet's destination address and port number, and sends the packet into the internal network so that it reaches the client computer.

Physical Security

One important element in preventing unauthorized users from accessing an internal LAN is physical security: preventing outsiders from gaining access to the organization's offices, server room, or network equipment facilities. Both main and remote physical facilities should be adequately secured and have the proper controls. Good security requires implementing access controls so that only authorized personnel can enter the closed areas where servers and network equipment are located or otherwise access the network. The network components themselves also have a level of physical security: computers can have locks on their power switches or passwords that disable the screen and keyboard.

In the previous section, we discussed the importance of locating backups and servers at separate (off-site) locations. Some companies have also argued that having many servers in different locations reduces risk and improves business continuity. Does having many servers disperse risk, or does it increase the points of vulnerability? A clear disaster recovery plan with an off-site backup and server facility can disperse risk, as distributed server systems do. But distributed servers offer many more physical vulnerabilities to an attacker: more machines to guard, upgrade, patch, and defend. Often these dispersed machines are all part of the same logical domain, which means that breaking into one of them can give the attacker access to the resources of the others.
It is our feeling that a well-backed-up, centralized data center can be made inherently more secure than a proliferated base of servers.

Proper security education, background checks, and the implementation of error and fraud controls are also very important. In many cases, the simplest way to gain access is to be hired as a janitor and access the network at night. In some ways this is easier than the methods described previously, because the intruder only has to insert a listening device or computer into the organization's network to record messages. Three areas are vulnerable to this type of unauthorized access: wireless LANs, network cabling, and network devices.

Wireless LANs are the easiest target for eavesdropping because they often reach beyond the physical walls of the organization. Chapter 7 discussed the techniques of WLAN security, so we do not repeat them here.

Network cables are the next easiest target because they often run long distances and usually are not regularly checked for tampering. The cables owned by the organization and installed within its facility are usually the first choice for eavesdropping: it is 100 times easier to tap a local cable than an interexchange channel, because it is extremely difficult to identify the specific circuits belonging to any one organization in a highly multiplexed switched interexchange circuit operated by a common carrier. Local cables should be secured behind walls and above ceilings, and telephone equipment and switching rooms (wiring closets) should be locked and their doors equipped with alarms. The primary goal is to control physical access by employees or vendors to the connector cables and modems.
This includes restricting their access to the wiring closets in which all the communication wires and cables are connected.

TECHNICAL FOCUS 11-3: Data Security Requires Physical Security

The general consensus is that if someone can physically get to your server for some period of time, then all of the information on the computer (except perhaps strongly encrypted data) is available to the attacker. With a Windows server, the attacker simply boots the computer from the CD drive with a Knoppix version of Linux (Knoppix is Linux on a CD). If the computer won't boot from the CD, the attacker simply changes the BIOS to make it boot from the CD. Knoppix finds all the drivers for the specific computer and gives the attacker a Linux desktop that can fully read all of the NTFS or FAT32 files. But what about Windows password access? Nothing to it: Knoppix completely bypasses it. The attacker can then read, copy, or transmit any of the files on the Windows machine. Similar attacks are also possible on a Linux or Unix server, but they are slightly more difficult.

Certain types of cable can decrease or increase security by making eavesdropping easier or more difficult. Obviously, any wireless network is at extreme risk for eavesdropping, because anyone in the area of the transmission can easily install devices to monitor the radio or infrared signals. Conversely, fiber-optic cables are harder to tap, thus increasing security. Some companies offer armored cable that is virtually impossible to cut without special tools. Other cables have built-in alarm systems: the U.S. Air Force, for example, uses pressurized cables filled with gas; if a cable is cut, the gas escapes, pressure drops, and an alarm sounds.

Network devices such as switches and routers should be secured in a locked wiring closet.
As discussed in Chapter 7, all messages within a given wireless LAN are actually received by all computers on the WLAN, although each processes only those messages addressed to it. It is rather simple to install a sniffer program that records all messages received for later (unauthorized) analysis. A computer with a sniffer program could likewise be plugged into an unattended switch to eavesdrop on all message traffic. A secure switch makes this type of eavesdropping more difficult by requiring a special authorization code to be entered before new computers can be added.

11.4.3 Server and Client Protection

Security Holes

Even with physical security and firewalls, the servers and client computers on a network may not be safe, because of security holes. A security hole is simply a bug that permits unauthorized access. Many commonly used operating systems have major security holes well known to potential intruders. Many security holes have been documented, and "patches" are available from vendors to fix them, but network managers may be unaware of all the holes or may simply forget to update their systems with new patches regularly.

A complete discussion of security holes is beyond the scope of this book. Many are highly technical; for example, sending a message designed to overflow a memory buffer, thereby placing a short command into a very specific memory area that performs some function. Others are rather simple, but not obvious. For example, an attacker can send a message that lists the server's address as both the sender and the destination, so that the server repeatedly sends messages to itself until it crashes.

MANAGEMENT FOCUS 11-6: Fake Antivirus?

The world of computer viruses is constantly evolving and becoming more advanced. In the early days of the Internet, viruses were designed to do funny things (such as turn the text on your screen upside down), but today they are designed to get your money and private information.
Once a virus is installed on a computer, it will interact with a remote computer and transfer sensitive data to it. Antivirus software was developed to prevent viruses from being installed on computers. However, not all antivirus software is made equal. Many antivirus software companies offer to scan your computer for free. Yes, for free! An old saying holds that if something sounds too good to be true, it probably is, and free antivirus software is no exception.

Chester Wisniewski, at Sophos Labs, explains that once you have downloaded a free antivirus program of this kind onto your computer, you have actually downloaded malware. Once you launch this software, it looks and behaves like a legitimate antivirus product. Many of these fake antivirus packages are fully multilingual and have a very user-friendly GUI (graphical user interface). However, once you start scanning your computer, the software marks legitimate files on your computer as worms and Trojans and warns you that your computer is infected. A typical user gets scared at this point and allows the software to remove the "infected" files. What is really happening is that malware is installed on your computer that will scan for sensitive information and send it to a host. Rather than trying to get a free antivirus, spend money on a legitimate product such as Sophos, Symantec, or McAfee. Popular news magazines, such as PC Magazine, provide annual reviews of legitimate antivirus software as well as the free offerings.
Your best protection against exploits of this kind is education.

Adapted from: "Which Antivirus Is the Best?" (www.pcantivirusreviews.com); "Fake Antivirus: What Are They and How Do You Avoid Them?" by Cassie Bodnar, October 11, 2013 (blog.kaspersky.com)

Once a security hole is discovered, word of it circulates quickly through the Internet, and a race begins: hackers share the discovery with other hackers, while security teams share it with other security teams. CERT is the central clearinghouse for major Internet-related security holes; the CERT team responds quickly to reports of new security problems, posting alerts and advisories on the Web and emailing them to those who subscribe to its service. The developer of the software usually works quickly to fix the security hole and produces a patch that corrects it. The patch is then shared with customers, who can download and apply it to their systems to prevent hackers from exploiting the hole to break in. Attacks that take advantage of a newly discovered security hole before a patch has been developed are called zero-day attacks.

One problem is that many network managers do not routinely respond to such security threats by immediately downloading and installing the patch; it often takes many months for patches to be distributed to most sites. Do you regularly install all the Windows or Mac updates on your computer?

Other security holes are not really holes at all but simply policies adopted by computer vendors that open the door to security problems, such as computer systems that come with a variety of preinstalled user accounts. These accounts and their initial passwords are well documented and known to all potential attackers, and network managers sometimes forget to change the passwords on these well-known accounts, enabling an attacker to slip in.
Operating Systems

The U.S. government requires certain levels of security in the operating systems and network operating systems it uses for certain applications. The minimum level of security is C2, and most major operating systems (e.g., Windows) provide at least C2. Most widely used systems are striving to meet the requirements of much higher security levels, such as B2. Very few systems meet the highest levels of security (A1 and A2).

There has been a long-running debate about whether the Windows operating system is less secure than other operating systems such as Linux. Every new attack on Windows systems ignites the debate: Windows detractors repeat "I told you so," while Windows defenders reply that this happens mostly because Windows is the obvious system to attack, being the most commonly used operating system, and because of the hostility of the Windows detractors themselves.

There is a critical difference in what applications can do in Windows and in Linux. Linux (and its ancestor Unix) was first written as a multiuser operating system in which different users had different rights. Only some users were system administrators with the rights to access and make changes to the critical parts of the operating system; all other users were barred from doing so.

TECHNICAL FOCUS 11-4: Exploiting a Security Hole

To exploit a security hole, the hacker has to know it's there. So how does a hacker find out? It's simple in the era of automated tools. First, the hacker has to find the servers on a network. The hacker could start by using network scanning software to systematically probe every IP address on a network, identifying all the servers on it. At this point, the hacker has narrowed the potential targets to a few servers. Second, the hacker needs to learn what services are available on each server.
To do this, he or she could use port scanning software to systematically probe every TCP/IP port on a given server, revealing which ports are in use and thus what services the server offers. For example, if the server responds on port 80, it is a Web server; if it responds on port 25, it is a mail server. Third, the hacker would seek out the exact software and version number providing each service. Suppose, for example, that the hacker decides to target mail servers. A variety of tools can probe the mail server software and, based on how the server responds to certain messages, determine which manufacturer and version number of software is being used. Finally, once the hacker knows which package and version number the server is using, he or she uses tools designed to exploit the known security holes in that software. For example, some older mail server software packages do not require users to authenticate themselves (e.g., by a user id and password) before accepting SMTP packets for the mail server to forward; in this case, the hacker could create SMTP packets with fake source addresses and use the server to flood the Internet with spam (i.e., junk mail). In another case, a certain version of a well-known e-commerce package enabled users to pass operating system commands to the server simply by appending a UNIX pipe symbol (|) and the command to the name of a file to be uploaded; when the system opened the uploaded file, it also executed the command attached to it.

In contrast, Windows (and its ancestor DOS) was first written as an operating system for a single personal computer, an environment in which the user was in complete control of the computer and could do anything he or she liked. As a result, Windows applications regularly access and make changes to critical parts of the operating system. There are advantages to this.
Windows applications can do many powerful things without the user needing to understand them. These applications can be rich in features and, more important, can appear to the user to be very friendly and easy to use. Everything appears to run "out of the box" without modification. Windows has built these features into the core of its systems, and any major rewrite of Windows to prevent this would most likely cause significant incompatibilities with all the applications designed to run under previous versions of Windows. To many, this would be a high price to pay for some unseen benefit called "security."

But there is a price for this friendliness: hostile applications can easily take over the computer and literally do whatever they want without the user knowing. Simply put, there is a trade-off between ease of use and security. Increasing needs for security demand more checks and restrictions, which translate into less friendliness and fewer features. It may very well be that there is an inherent and permanent contradiction between the ease of use of a system and its security.

Trojan Horses

One important tool for gaining unauthorized access is a Trojan horse. Trojans are remote access management consoles (sometimes called rootkits) that enable users to access a computer and manage it from afar. If you see free software that will enable you to control your computer from anywhere, be careful: the software may also permit an attacker to control your computer from anywhere! Trojans are most often concealed in other software that unsuspecting users download over the Internet (their name alludes to the original Trojan horse). Music and video files shared on Internet music sites are common carriers of Trojans.
When the user downloads and plays a music file, it plays normally, and the attached Trojan software silently installs a small program that enables the attacker to take complete control of the user's computer; the user is unaware that anything bad has happened. The attacker then simply connects to the user's computer and has the same access and controls as the user. Many Trojans are completely undetectable by the very best antivirus software. One of the first major Trojans was Back Orifice, which aggressively attacked Windows servers. Back Orifice gave the attacker the same functions as the administrator of the infected server, and then some: complete file and network control, device and registry access, and packet and application redirection. It was every administrator's worst nightmare, and every attacker's dream. More recently, Trojans have morphed into tools such as MoSucker and Optix Pro. These attack consoles now have one-button clicks to disable firewalls, antivirus software, and any other defensive process that might be running on the victim's computer. The attacker can choose what port the Trojan runs on, what it is named, and when it runs. They can listen in to a computer's microphone or look through an attached camera—even if the device appears to be off. Figure 11-15 shows a menu from one Trojan that illustrates some of the "fun stuff" an attacker can do, such as opening and closing the CD tray, beeping the speaker, or reversing the mouse buttons so that clicking the left button actually sends a right click. Not only have these tools become powerful, but they are also very easy to use—much easier than the defensive countermeasures needed to protect oneself from them. And what does the near future hold for Trojans? We can easily envision Trojans that schedule themselves to run at, say, 2:00 A.M., choosing a random port and emailing the attacker that the machine is now "open for business" at port # NNNNN.
The attackers can then step in, do whatever they want to do, run a script to erase most of their tracks, and then sign out and shut off the Trojan. Once the job is done, the Trojan could even erase itself from storage. Scary? Yes. And the future does not look better.

FIGURE 11-15 One menu on the control console for the Optix Pro Trojan (Source: windowsecurity.com)

Spyware, adware, and DDoS agents are three types of Trojans. DDoS agents were discussed in the previous section. As the name suggests, spyware monitors what happens on the target computer. Spyware can record keystrokes that appear to be user ids and passwords so the intruder can gain access to the user's accounts (e.g., bank accounts). Adware monitors a user's actions and displays pop-up advertisements on the user's screen. For example, suppose you clicked on the Web site of an online retailer. Adware might pop up a window for a competitor or, worse still, redirect your browser to the competitor's Web site. Many antivirus software packages now routinely search for and remove spyware, adware, and other Trojans, and special-purpose antispyware software is available (e.g., Spybot). Some firewall vendors are now adding anti-Trojan logic to their devices to block any transmissions from infected computers from entering or leaving their networks.

11.4.4 Encryption

One of the best ways to prevent intrusion is encryption, which is a means of disguising information by the use of mathematical rules known as algorithms. Actually, cryptography is the more general and proper term. Encryption is the process of disguising information, whereas decryption is the process of restoring it to readable form. When information is in readable form, it is called plaintext; when in encrypted form, it is called ciphertext. Encryption can be used to encrypt files stored on a computer or to encrypt data in transit between computers.
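To make these terms concrete, here is a minimal sketch in Python of the plaintext-to-ciphertext round trip. The XOR "cipher" and the key value are purely illustrative (this is not a secure algorithm); the sketch only shows that the same secret key turns plaintext into ciphertext and back.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. Applying the function twice
    # restores the original, so it both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"0123456789abcdef"                 # illustrative shared secret key
plaintext = b"Transfer $500 to account 42"
ciphertext = xor_crypt(plaintext, key)    # unreadable without the key

assert ciphertext != plaintext                   # the disguised form (ciphertext)
assert xor_crypt(ciphertext, key) == plaintext   # decryption restores the plaintext
```

Real algorithms such as AES are vastly more sophisticated, but the workflow is the same: plaintext in, ciphertext out, and the key governs both directions.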
There are two fundamentally different types of encryption: symmetric and asymmetric. With symmetric encryption, the key used to encrypt a message is the same as the one used to decrypt it. With asymmetric encryption, the key used to decrypt a message is different from the key used to encrypt it.

MANAGEMENT FOCUS 11-7: Sony's Spyware

Sony BMG Entertainment, the music giant, included a spyware rootkit on audio CDs sold in the fall of 2005, including CDs by such artists as Celine Dion, Frank Sinatra, and Ricky Martin. The rootkit was automatically installed on any PC that played an infected CD. The rootkit was designed to track the behavior of users who might be illegally copying and distributing the music on the CD, with the goal of preventing illegal copies from being widely distributed. Sony made two big mistakes. First, it failed to inform customers who purchased its CDs about the rootkit, so users unknowingly installed it. The rootkit used standard spyware techniques to conceal its existence to prevent users from discovering it. Second, Sony used a widely available rootkit, which meant that any knowledgeable user on the Internet could use the rootkit to take control of an infected computer. Several viruses have been written that exploit the rootkit and are now circulating on the Internet. The irony is that the rootkit infringed on copyrights held by several open source projects, which means Sony was engaged in the very act it was trying to prevent: piracy. When the rootkit was discovered, Sony was slow to apologize, slow to stop selling rootkit-infected CDs, and slow to help customers remove the rootkit. Several lawsuits were filed in the United States and abroad seeking damages. The Federal Trade Commission (FTC) found on January 30, 2007, that Sony BMG's CD copy protection had violated federal law. Sony BMG had to reimburse consumers up to $150 to repair damage caused by the illegal software that was installed on users' computers without their consent.
This adventure proved to be very costly for Sony BMG. (Adapted from: J.A. Halderman and E.W. Felten, "Lessons from the Sony CD DRM Episode," working paper, Princeton University, 2006; "Sony Anti-Customer Technology Roundup and Time-Line," www.boingboing.net, February 15, 2006; and Wikipedia.com.)

MANAGEMENT FOCUS 11-8: Trojans at Home

It started with a routine phone call to technical support—one of our users had a software package that kept crashing. The network technician was sent to fix the problem but couldn't, so thoughts turned to a virus or Trojan. After an investigation, the security team found a remote FTP Trojan installed on the computer that was storing several gigabytes of cartoons and making them available across the Internet. The reason for the crash was that the FTP server was an old version that was not compatible with the computer's operating system. The Trojan was removed and life went on. Three months later the same problem occurred on a different computer. Because the previous Trojan had been logged, the network support staff quickly recognized it as a Trojan. The same hacker had returned, storing the same cartoons on a different computer. This triggered a complete investigation. All computers on our Business School network were scanned, and we found 15 computers that contained the Trojan. We gathered forensic evidence to help identify the attacker (e.g., log files, registry entries) and filed an incident report with the university incident response team, advising them to scan all computers on the university network immediately. The next day, we found more computers containing the same FTP Trojan and the same cartoons. The attacker had come back overnight and taken control of more computers. This immediately escalated the problem.
We cleaned some of the machines but left some available for use by the hacker to encourage him not to attack other computers. The network security manager replicated the software and used it to investigate how the Trojan worked. We determined that the software used a brute-force attack to break the administrative password file on the standard image that we used in our computer labs. We changed the password and installed a security patch on our lab computers' standard configuration. We then upgraded all the lab computers and only then cleaned the remaining machines controlled by the attacker. The attacker had also taken over many other computers on campus for the same purpose. With the forensic evidence that we and the university security incident response team had gathered, the case is now in court. (Source: Alan Dennis)

Single-Key Encryption

Symmetric encryption (also called single-key encryption) has two parts: the algorithm and the key, which personalizes the algorithm by making the transformation of data unique. Two pieces of identical information encrypted with the same algorithm but with different keys produce completely different ciphertexts. With symmetric encryption, the communicating parties must share the one key. If the algorithm is adequate and the key is kept secret, acquisition of the ciphertext by unauthorized personnel is of no consequence to the communicating parties. Good encryption systems do not depend on keeping the algorithm secret; only the keys need to be kept secret. The key is a relatively small numeric value (in terms of the number of bits). The larger the key, the more secure the encryption, because a large "key space" protects the ciphertext against those who try to break it by brute-force attacks—which simply means trying every possible key. There should be a large enough number of possible keys that an exhaustive brute-force attack would take inordinately long or would cost more than the value of the encrypted information.
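The idea of a brute-force attack on a small key space can be sketched in a few lines of Python. The toy single-byte cipher below is invented for illustration only; the point is that 2^8 = 256 candidate keys can be searched instantly, and every extra key bit doubles the attacker's work.

```python
def toy_encrypt(data: bytes, key: int) -> bytes:
    # A toy cipher with an 8-bit key: XOR every byte with the key value.
    return bytes(b ^ key for b in data)

secret_key = 0xA7
ciphertext = toy_encrypt(b"attack at dawn", secret_key)

# Brute force: try every possible key until the known plaintext appears.
recovered = next(k for k in range(256)
                 if toy_encrypt(ciphertext, k) == b"attack at dawn")
assert recovered == secret_key

# Real key spaces are astronomically larger: 2**56 keys for DES vs. 2**128 for AES.
print(2 ** 128 // 2 ** 56)   # AES-128's key space is 2**72 times larger than DES's
```

A 256-key search finishes in microseconds; the same exhaustive strategy against a modern key space is what the "inordinately long" requirement above is designed to guarantee.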
Because the same key is used to encrypt and decrypt, symmetric encryption can cause problems with key management; keys must be shared among the senders and receivers very carefully. Before two computers in a network can communicate using encryption, both must have the same key. This means that both computers can then send and read any messages that use that key. Companies often do not want one company to be able to read messages they send to another company, so there must be a separate key used for communication with each company. These keys must be recorded but kept secure so that they cannot be stolen. Because the algorithm is known publicly, the disclosure of the key means the total compromise of encrypted messages. Managing this system of keys can be challenging.

One commonly used symmetric encryption technique is the Data Encryption Standard (DES), which was developed in the mid-1970s by the U.S. government in conjunction with IBM. DES is standardized by the National Institute of Standards and Technology (NIST). The most common form of DES uses a 56-bit key, which experts can break in less than a day (i.e., experts with the right tools can figure out what a message encrypted using DES says, without knowing the key, in less than 24 hours). DES is no longer recommended for data needing high security, although some companies continue to use it for less important data. Triple DES (3DES) is a newer standard that is harder to break. As the name suggests, it involves using DES three times, usually with three different keys, to produce the encrypted text; this provides a stronger level of security because it uses a total of 168 key bits (i.e., 3 times 56 bits). NIST's newer standard, called the Advanced Encryption Standard (AES), has replaced DES. AES has key sizes of 128, 192, and 256 bits.
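The effect of key length can be put into rough numbers. The guess rate below (one trillion keys per second) is an assumed figure chosen only for illustration, not a measured one; on average, an attacker must search half the key space before finding the key.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_crack(key_bits: int, guesses_per_second: float = 1e12) -> float:
    # Average brute-force time: half the key space divided by the guess rate.
    return (2 ** key_bits / 2) / guesses_per_second / SECONDS_PER_YEAR

print(years_to_crack(56))    # 56-bit DES: a small fraction of a year at this rate
print(years_to_crack(128))   # 128-bit AES: on the order of 10**18 years
```

The exact figures depend entirely on the assumed hardware, but the exponential gap between 56-bit and 128-bit keys does not.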
NIST estimates that, using the most advanced computers and techniques available today, it would require about 150 trillion years to crack AES by brute force. As computers and techniques improve, the time requirement will drop, but AES seems secure for the foreseeable future; the original DES lasted 20 years, so AES may have a similar life span. Another commonly used symmetric encryption algorithm is RC4, developed by Ron Rivest of RSA Data Security, Inc. RC4 can use a key up to 256 bits long but most commonly uses a 40-bit key. It is faster than DES but suffers from the same problem with brute-force attacks: its 40-bit key can be broken by a determined attacker in a day or two.

Today, the U.S. government considers encryption to be a weapon and regulates its export in the same way it regulates the export of machine guns or bombs. Present rules prohibit the export of encryption techniques with keys longer than 64 bits without permission, although exports to Canada and the European Union are permitted, and American banks and Fortune 100 companies are now permitted to use more powerful encryption techniques in their foreign offices. This policy made sense when only American companies had the expertise to develop powerful encryption software. Today, however, many non-American companies are developing encryption software that is more powerful than American software, which alone is limited by these rules. Therefore, the American software industry is lobbying the government to change the rules so that it can successfully compete overseas.

Public Key Encryption

The most popular form of asymmetric encryption (also called public key encryption) is RSA, which was invented at MIT in 1977 by Rivest, Shamir, and Adleman, who founded RSA Data Security in 1982. The patent expired in 2000, so many new companies entered the market and public key software dropped in price. The RSA technique forms the basis for today's public key infrastructure (PKI).
Public key encryption is inherently different from symmetric single-key systems like DES. Because public key encryption is asymmetric, there are two keys. One key (called the public key) is used to encrypt the message, and a second, very different private key is used to decrypt the message. Keys are often 512 bits, 1,024 bits, or 2,048 bits in length.

Public key systems are based on one-way functions. Even though you originally know both the contents of your message and the public encryption key, once the message is encrypted by the one-way function, it cannot be decrypted without the private key. One-way functions, which are relatively easy to calculate in one direction, are impossible to "uncalculate" in the reverse direction. Public key encryption is one of the most secure encryption techniques available, excluding special encryption techniques developed by national security agencies.

Public key encryption greatly reduces the key management problem. Each user has a public key that is used to encrypt messages sent to that user. These public keys are widely publicized (e.g., listed in a telephone book-style directory)—that's why they're called "public" keys. In addition, each user has a private key that decrypts only the messages that were encrypted by its public key. This private key is kept secret (that's why it's called the "private" key). The net result is that if two parties wish to communicate with one another, there is no need to exchange keys beforehand. Each knows the other's public key from the listing in a public directory and can communicate encrypted information immediately. The key management problem is reduced to the on-site protection of the private key. Figure 11-16 illustrates how this process works. All public keys are published in a directory.
When Organization A wants to send an encrypted message to Organization B, it looks through the directory to find B's public key. It then encrypts the message using B's public key. This encrypted message is then sent through the network to Organization B, which decrypts the message using its private key.

FIGURE 11-16 Secure transmission with public key encryption

FIGURE 11-17 Authenticated and secure transmission with public key encryption

Authentication

Public key encryption also permits the use of digital signatures through a process of authentication. When one user sends a message to another, it is difficult to legally prove who actually sent the message. Legal proof is important in many communications, such as bank transfers and buy/sell orders in currency and stock trading, which normally require legal signatures. Public key encryption algorithms are invertible, meaning that text encrypted with either key can be decrypted by the other. Normally, we encrypt with the public key and decrypt with the private key. However, it is possible to do the inverse: encrypt with the private key and decrypt with the public key. Because the private key is secret, only the real user could use it to encrypt a message. Thus, a digital signature or authentication sequence is used as a legal signature on many financial transactions.
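The invertible property can be demonstrated with a textbook-sized RSA example. The primes here are far too small to be secure (real keys are 512 bits and up, as noted above); they only make the two-key arithmetic visible.

```python
# Toy RSA key generation with tiny primes (insecure; for illustration only).
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent; (e, n) is the public key
d = pow(e, -1, phi)          # private exponent: 2753 (modular inverse; Python 3.8+)

message = 65

# Normal use: encrypt with the public key, decrypt with the private key.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# The inverse, the basis of digital signatures: "encrypt" with the private key,
# and anyone holding the public key can verify (decrypt) the result.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

Because only the holder of d could have produced a signature that e recovers, verifying with the public key serves as proof of who signed.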
This signature usually consists of the name of the signing party plus other key contents, such as unique information from the message (e.g., date, time, or dollar amount). The signature and the other key contents are encrypted by the sender using the private key. The receiver uses the sender's public key to decrypt the signature block and compares the result to the name and other key contents in the rest of the message to ensure a match. Figure 11-17 illustrates how authentication can be combined with public key encryption to provide a secure and authenticated transmission with a digital signature. The plaintext message is first encrypted using Organization A's private key and then encrypted using Organization B's public key. It is then transmitted to B. Organization B first decrypts the message using its private key. It sees that part of the message (the key contents) is still in ciphertext, indicating it is an authenticated message. B then decrypts the key contents part of the message using A's public key to produce the plaintext message. Because only A has the private key that matches A's public key, B can safely assume that A sent the message.

The only problem with this approach lies in ensuring that the person or organization who sent the document with the correct private key is actually the person or organization it claims to be. Anyone can post a public key on the Internet, so there is no way of knowing for sure who they actually are. For example, it would be possible for someone to create a Web site and claim to be "Organization A" when in fact the person is really someone else. This is where the Internet's public key infrastructure (PKI) becomes important. The PKI is a set of hardware, software, organizations, and policies designed to make public key encryption work on the Internet.
PKI begins with a certificate authority (CA), which is a trusted organization that can vouch for the authenticity of the person or organization using authentication (e.g., VeriSign). A person wanting to use a CA registers with the CA and must provide some proof of identity. There are several levels of certification, ranging from a simple confirmation of a valid email address to a complete police-style background check with an in-person interview. The CA issues a digital certificate, which is the requestor's public key encrypted using the CA's private key, as proof of identity. This certificate is then attached to the user's email or Web transactions, in addition to the authentication information. The receiver then verifies the certificate by decrypting it with the CA's public key—and must also contact the CA to ensure that the user's certificate has not been revoked by the CA. For higher-security certifications, the CA requires that a unique "fingerprint" be issued by the CA for each message sent by the user. The user submits the message to the CA, which creates the unique fingerprint by combining the CA's private key with the message's authentication key contents. Because the user must obtain a unique fingerprint for each message, this ensures that the CA has not revoked the certificate between the time it was issued and the time the message was sent by the user.

Encryption Software

Pretty Good Privacy (PGP) is a freeware public key encryption package developed by Philip Zimmermann that is often used to encrypt email. Users post their public key on Web pages, for example, and anyone wishing to send them an encrypted message simply cuts and pastes the key off the Web page into the PGP software, which encrypts and sends the message.

Secure Sockets Layer (SSL) is an encryption protocol widely used on the Web. It operates between the application-layer software and the transport layer (in what the OSI model calls the presentation layer).
SSL encrypts outbound packets coming out of the application layer before they reach the transport layer and decrypts inbound packets coming out of the transport layer before they reach the application layer. With SSL, the client and the server start with a handshake for PKI authentication and for the server to provide its public key and preferred encryption technique to the client (usually RC4, DES, 3DES, or AES). The client then generates a key for this encryption technique, which is sent to the server encrypted with the server's public key. The rest of the communication then uses this encryption technique and key.

IP Security Protocol (IPSec) is another widely used encryption protocol. IPSec differs from SSL in that SSL is focused on Web applications, whereas IPSec can be used with a much wider variety of application-layer protocols. IPSec sits between IP at the network layer and TCP/UDP at the transport layer. IPSec can use a wide variety of encryption techniques, so the first step is for the sender and receiver to establish the technique and key to be used. This is done using Internet Key Exchange (IKE). Both parties generate a random key and send it to the other using an encrypted authenticated PKI process, and then put these two numbers together to produce the key. The encryption technique is also negotiated between the two, often being 3DES. Once the keys and technique have been established, IPSec can begin transmitting data.

IPSec can operate in either transport mode or tunnel mode for VPNs. In IPSec transport mode, IPSec encrypts just the IP payload, leaving the IP packet header unchanged so it can be easily routed through the Internet.
In this case, IPSec adds an additional packet (either an Authentication Header [AH] or an Encapsulating Security Payload [ESP]) at the start of the IP packet that provides encryption information for the receiver. In IPSec tunnel mode, IPSec encrypts the entire IP packet and must therefore add an entirely new IP packet that contains the encrypted packet as well as the IPSec AH or ESP packets. In tunnel mode, the newly added IP packet just identifies the IPSec encryption agent at the next destination, not the final destination; once the IPSec packet arrives at the encryption agent, the encrypted packet is decrypted and sent on its way. In tunnel mode, attackers can learn only the endpoints of the VPN tunnel, not the ultimate source and destination of the packets.

11.4.5 User Authentication

Once the network perimeter and the network interior have been secured, the next step is to develop a way to ensure that only authorized users are permitted into the network and into specific resources in the interior of the network. This is called user authentication. The basis of user authentication is the user profile for each user's account that is assigned by the network manager. Each user's profile specifies what data and network resources he or she can access and the type of access (read only, write, create, delete). User profiles can limit the allowable log-in days, time of day, physical locations, and the allowable number of incorrect log-in attempts. Some will also automatically log a user out if that person has not performed any network activity for a certain length of time (e.g., the user has gone to lunch and has forgotten to log off the network). Regular security checks throughout the day when the user is logged in can determine whether a user is still permitted access to the network. For example, the network manager might have disabled the user's profile while the user is logged in, or the user's account may have run out of funds.
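A user profile of the kind described above can be sketched as a small data structure. The field names and limits here are hypothetical, chosen only to mirror the restrictions mentioned (access types, allowed days, time of day, and incorrect log-in attempts).

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserProfile:
    user_id: str
    access_types: set = field(default_factory=lambda: {"read"})          # read only by default
    allowed_days: set = field(default_factory=lambda: {0, 1, 2, 3, 4})   # Monday-Friday
    allowed_hours: range = range(8, 18)                                  # 8 A.M. to 6 P.M.
    failed_logins: int = 0
    max_failed_logins: int = 3

    def may_log_in(self, now: datetime) -> bool:
        # Enforce the profile's day, time-of-day, and incorrect-attempt limits.
        return (self.failed_logins < self.max_failed_logins
                and now.weekday() in self.allowed_days
                and now.hour in self.allowed_hours)

profile = UserProfile("asmith")
print(profile.may_log_in(datetime(2024, 3, 4, 9, 30)))   # a Monday morning: allowed
print(profile.may_log_in(datetime(2024, 3, 3, 9, 30)))   # a Sunday: refused
```

The periodic security checks described above amount to re-running a check like `may_log_in` against the current profile while the session is active.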
Creating accounts and profiles is simple. When a new staff member joins an organization, that person is assigned a user account and profile. One security problem is the removal of user accounts when someone leaves an organization. Often, network managers are not informed of the departure, and accounts remain in the system. For example, an examination of the user accounts at the University of Georgia found that 30% belonged to staff members no longer employed by the university. If the staff member's departure was not friendly, there is a risk that he or she may attempt to access data and resources and use them for personal gain, or destroy them to get back at the organization. Many systems permit the network manager to assign expiration dates to user accounts to ensure that unused profiles are automatically deleted or deactivated, but these actions do not replace the need to notify network managers about an employee's departure as part of the standard human resources procedures.

TECHNICAL FOCUS 11-5: Cracking a Password

To crack Windows passwords, you just need to get a copy of the security account manager (SAM) file in the WINNT directory, which contains all the Windows passwords in an encrypted format. If you have physical access to the computer, that's sufficient. If not, you might be able to hack in over the network. Then, you just need to use a Windows-based cracking tool such as L0phtCrack. Depending on the difficulty of the password, the time needed to crack it via brute force could range from minutes up to a day.

Or that's the way it used to be. Recently the Cryptography and Security Lab in Switzerland developed a new password-cracking tool that relies on very large amounts of RAM. It then does indexed searches of possible passwords that are already in memory. This tool can cut cracking times to less than 1/10 of the time of previous tools. Keep adding RAM and megahertz and you could reduce the crack times to 1/100 that of the older cracking tools. This means that if you can get your hands on the Windows-encrypted password file, then the game is over. It can literally crack complex Windows passwords in seconds.

It's different for Linux, Unix, or Apple computers. These systems add a 12-bit random "salt" to the password, which means that cracking their passwords will take 4,096 (2^12) times longer. That margin is probably sufficient for now, until the next generation of cracking tools comes along. Maybe. So what can we say from all of this? That you are 4,096 times safer with Linux? Well, not necessarily. But what we may be able to say is that strong password protection, by itself, is an oxymoron. We must combine it with other methods of security to have reasonable confidence in the system.

MANAGEMENT FOCUS 11-9: Selecting Passwords

The keys to users' accounts are passwords—we all know this. The stronger the password, the more secure your account. But what does it mean to have a "strong" password? We have all heard that we shouldn't pick keyboard patterns or names of family members or pets. But different organizations have different rules for how to create strong passwords. Some might not give you any guidelines, whereas others are strict about how many uppercase letters, numbers, and special characters you should use. The National Institute of Standards and Technology (NIST) advises that password strength boils down to the number of bits of entropy that a password has. So how can we calculate these bits of entropy? NIST has proposed the following rules to calculate the number of bits of entropy in a password:

1. The first byte counts as 4 bits.
2. The next 7 bytes count as 2 bits each.
3. The next 12 bytes count as 1.5 bits each.
4. Anything beyond that counts as 1 bit each.
5. Mixed case + nonalphanumeric characters add 2 to 6 more bits, depending on complexity.

For example, let's evaluate the entropy of the following password: Pa$$w0rd (one you shouldn't use). Recall that each letter is represented as 1 byte.

◾ The first byte counts as 4 bits; therefore, "P" gives us 4 bits of entropy.
◾ The next 7 bytes count as 2 bits each; therefore, "a$$w0rd" gives us 7 × 2 bits = 14 additional bits of entropy.
◾ Mixed case + nonalphanumeric characters can give us up to 6 extra bits. Let's stay conservative and count 2 bits for these characters in our password, because the symbols are a close match for letters.

The total number of bits of entropy for our password is 20. How long would it take to crack this password using a brute-force attack? Well, there are 2^20 possibilities, and if a computer can make 1,000 guesses per second, it would take approximately 17 minutes to break this password. We can agree that this is a very easy password to remember, but it is also very easy to break. So how can we increase our password strength without making the password almost impossible to remember? More companies are moving to passphrases instead of passwords. A passphrase is simply four or more words that do not form a common phrase such as a line from a song or movie. Let's look at the following password that uses four common words: horses love eating apples (without the spaces between the words). This password has 4 (for "h") + 14 (for the next 7 bytes, "orseslo") + 18 (for the next 12 bytes, "veeatingappl") + 2 (for the remaining "es") = 38 bits of entropy. It would take about 8.7 years for a computer making 1,000 guesses per second to break this password. You can increase the strength of this password by adding spaces between the words or a few numbers at the end. It will then be a very easy password to remember but a very difficult one to crack.

General rules:
◾ Use passphrases, not passwords. Choose three or four easily remembered words.
◾ Longer is better. We recommend passphrases that are at least 15 characters long.
◾ Don't use the same passphrase everywhere. Instead, create a general passphrase you use but customize it for each site that requires a password by adding some numbers to it. For example, count the number of times the letter "a" appears in the URL of the website you are logging in to and add that number to the end of your usual passphrase to create a unique passphrase just for that site.
◾ Always choose a unique passphrase for every high-risk site, such as your bank.

Gaining access to an account can be based on something you know, something you have, or something you are.

Passwords

The most common approach is something you know, usually a password. Before users can log in, they need to enter a password. Unfortunately, passwords are often poorly chosen, enabling intruders to guess them and gain access. Some organizations now require that users choose passwords that meet certain security requirements, such as a minimum length or the inclusion of numbers and/or special characters (e.g., $, #, !). Some have moved to passphrases, which, as the name suggests, are a series of words separated by spaces. Requiring complex passwords and passphrases has also been called one of the top five least effective security controls, because it can frustrate users and lead them to record their passwords in places from which they can be stolen. Management Focus 11-9 offers some suggestions on how to create a strong password that is easy to remember.

Access Cards

Requiring passwords provides, at best, midlevel security (much like locking your doors when you leave the house); it won't stop the professional intruder, but it will slow amateurs. Nonetheless, most organizations today use only passwords. About a third of organizations go beyond this and require users to enter a password in conjunction with something they have, an access card.
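The entropy arithmetic from Management Focus 11-9 can be expressed as a short function. The composition bonus (2 to 6 bits for mixed case and nonalphanumeric characters) is a judgment call, so it is passed in explicitly; the rate of 1,000 guesses per second matches the sidebar's assumption.

```python
def nist_entropy_bits(password: str, composition_bonus: float = 0) -> float:
    # 4 bits for the first byte, 2 bits each for the next 7, 1.5 bits each
    # for the next 12, and 1 bit for each byte beyond that.
    bits = 0.0
    for i in range(len(password)):
        if i == 0:
            bits += 4
        elif i < 8:
            bits += 2
        elif i < 20:
            bits += 1.5
        else:
            bits += 1
    return bits + composition_bonus

def crack_seconds(bits: float, guesses_per_second: float = 1000) -> float:
    return 2 ** bits / guesses_per_second

print(nist_entropy_bits("Pa$$w0rd", composition_bonus=2))   # 20.0 bits
print(crack_seconds(20) / 60)                               # roughly 17 minutes
print(nist_entropy_bits("horsesloveeatingapples"))          # 38.0 bits
print(crack_seconds(38) / (3600 * 24 * 365.25))             # several years
```

The function makes the sidebar's point quantitative: each added word in a passphrase buys far more entropy than a symbol substitution in a short password.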
A smart card is a card about the size of a credit card that contains a small computer chip. This card can be read by a device, and to gain access to the network, the user must present both the card and the password. Intruders must have access to both before they can break in. The best example of this is the automated teller machine (ATM) network operated by your bank. Before you can gain access to your account, you must have both your ATM card and the access number. Another approach is to use one-time passwords. The user connects to the network as usual, and after the user's password is accepted, the system generates a one-time password. The user must enter this password to gain access; otherwise, the connection is terminated. The user can receive this one-time password in a number of ways (e.g., via a pager). Other systems provide the user with a unique number that must be entered into a separate handheld device (called a token), which in turn displays the password for the user to enter. Other systems use time-based tokens, in which the one-time password is changed every 60 seconds. The user has a small card (often attached to a key chain) that is synchronized with the server and displays the one-time password. With any of these systems, an attacker must know the user's account name and password and have access to the user's password device before he or she can log in.
Biometrics
In high-security applications, a user may be required to present something he or she is, such as a finger, hand, or the retina of the eye for scanning by the system. These biometric systems scan the user to ensure that the user is the sole individual authorized to access the network account. About 15% of organizations now use biometrics. Although most biometric systems are developed for high-security users, several low-cost biometric systems are now on the market. The most popular biometric system is the fingerprint scanner.
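Returning briefly to the time-based tokens described above: their behavior can be sketched with the standard TOTP algorithm (RFC 6238, built on the HMAC-based HOTP of RFC 4226). This is an illustration, not the proprietary algorithm inside any particular commercial token; the 60-second window and the shared secret are parameters, and token and server produce matching codes only if they hold the same secret and roughly synchronized clocks.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time: float, step: int = 60, digits: int = 6) -> str:
    """One-time password for the time window containing `at_time` (RFC 6238)."""
    counter = int(at_time // step)                # which time window we are in
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                  # shared between token and server
# Token and server independently compute the same code for the same window.
print(totp(secret, time.time()))
# RFC 6238 test vector: at T = 59 s with a 30-second step, the code is 287082.
print(totp(secret, 59, step=30))                  # → 287082
```

Because the code is derived from the current time window, a stolen code is useless once the window passes, which is exactly the property the time-based tokens in the text rely on.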
Several vendors sell devices for less than $100 that are the size of a mouse and that can scan a user's fingerprint. Some laptops now come with built-in fingerprint scanners that replace traditional Windows logins. Although some banks have begun using fingerprint devices for customer access to their accounts over the Internet, use of such devices has not become widespread, which we find a bit puzzling. Fingerprint scanning is unobtrusive and means users no longer have to remember arcane passwords.
Central Authentication
One long-standing problem has been that users are often assigned user profiles and passwords on several different computers. Each time a user wants to access a new server, he or she must supply his or her password. This is cumbersome for the users, and even worse for the network manager who must manage all the separate accounts for all the users. More and more organizations are adopting central authentication (also called network authentication, single sign-on, or directory services), in which a log-in server is used to authenticate the user. Instead of logging into a file server or application server, the user logs into the authentication server. This server checks the user ID and password against its database and, if the user is an authorized user, issues a certificate (also called credentials). Whenever the user attempts to access a restricted service or resource that requires a user ID and password, the user is challenged, and his or her software presents the certificate (which the authentication server revalidates at that time). If the authentication server validates the certificate, then the service or resource lets the user in. In this way, the user no longer needs to enter his or her password to be authenticated to each new resource or service he or she uses.
This also ensures that the user does not accidentally give out his or her password to an unauthorized service; it provides mutual authentication of both the user and the service or resource. The most commonly used authentication protocol is Kerberos, developed at MIT (see web.mit.edu/kerberos/www). Although many systems use only one authentication server, it is possible to establish a series of authentication servers for different parts of the organization. Each server authenticates clients in its domain but can also pass authentication credentials to authentication servers in other domains.
11.4.6 Preventing Social Engineering
One of the most common ways for attackers, even master hackers, to break into a system is social engineering, which refers to breaking security simply by asking. For example, attackers routinely phone unsuspecting users and, imitating someone such as a technician or senior manager, ask for a password. Unfortunately, too many users want to be helpful and simply provide the requested information. At first, it seems ridiculous to believe that someone would give his or her password to a complete stranger, but a skilled social engineer is like a good con artist: he (and most social engineers are men) can manipulate people. Most security experts no longer test for social engineering attacks; they know from experience that social engineering will eventually succeed in any organization and therefore assume that attackers can gain access at will to normal user accounts. Training end users not to divulge passwords may not eliminate social engineering attacks, but it may reduce their effectiveness so that hackers give up and move on to easier targets. Acting out social engineering skits in front of users often works very well; when employees see how they can be manipulated into giving out private information, it becomes more memorable and they tend to become much more careful. Phishing is a very common type of social engineering.
The attacker simply sends an email to millions of users telling them that their bank account has been shut down due to an unauthorized access attempt and that they need to reactivate it by logging in. The email contains a link that directs the user to a fake Web site that appears to be the bank's Web site. After the user logs into the fake site, the attacker has the user's user ID and password and can break into his or her account at will. Clever variants on this include an email informing you that a new user has been added to your PayPal account, stating that the IRS has issued you a refund and you need to verify your social security number, or offering a mortgage at a very low rate for which you need to provide your social security number and credit card number.
TECHNICAL FOCUS 11-6: Inside Kerberos
Kerberos, the most commonly used central authentication protocol, uses symmetric encryption (usually DES). Kerberos is used by a variety of central authentication services, including Windows Active Directory services. When you log in to a Kerberos-based system, you provide your user ID and password to the Kerberos software on your computer. This software sends a request containing the user ID but not the password to the Kerberos authentication server (called the Key Distribution Center [KDC]). The KDC checks its database for the user ID, and if it finds it, then it accepts the log-in and does two things. First, it generates a service ticket (ST) for the KDC that contains information about the KDC, a time stamp, and, most importantly, a unique session key (SK1), which will be used to encrypt all further communication between the client computer and the KDC until the user logs off. SK1 is generated separately for each user and is different every time the user logs in.
Now, here's the clever part: The ST is encrypted using a key based on the password that matches the user ID. The client computer can only decrypt the ST if it knows the password that matches the user ID used to log in. If the user enters an incorrect password, the Kerberos software on the client can't decrypt the ST and asks the user to enter a new password. This way, the password is never sent over the network. Second, the KDC creates a Ticket-Granting Ticket (TGT). The TGT includes information about the client computer and a time stamp that is encrypted using a secret key known only to the KDC and other validated servers. The KDC sends the TGT to the client computer encrypted with SK1, because all communications between the client and the KDC are encrypted with SK1 (so no one else can read the TGT). The client decrypts the transmission to receive the TGT, but because the client does not know the KDC's secret key, it cannot decrypt the contents of the TGT. From now until the user logs off, the user does not need to provide his or her password again; the Kerberos client software will use the TGT to gain access to all servers that require a password. The first time a user attempts to use a server that requires a password, that server directs the user's Kerberos software to obtain a service ticket (ST) for it from the KDC. The user's Kerberos software sends the TGT to the KDC along with information about which server the user wants to access (remember that all communications between the client and the KDC are encrypted with SK1). The KDC checks to make sure that the user has not logged off, and if the TGT is validated, the KDC sends the client an ST for the desired server and a new session key (SK2) that the client will use to communicate with that server, both of which have been encrypted using SK1. The ST contains authentication information and SK2, both of which have been encrypted using the secret key known only to the KDC and the server.
The client presents to the server a log-in request (which specifies the user ID, a time and date stamp, and other information) that has been encrypted with SK2, along with the ST. The server decrypts the ST using the KDC's secret key to find the authentication information and SK2. It uses SK2 to decrypt the log-in request. If the log-in request is valid after decrypting with SK2, the server accepts the log-in and sends the client a packet containing information about the server that has been encrypted with SK2. This process authenticates the client to the server and also authenticates the server to the client. Both now communicate using SK2. Notice that the server never learns the user's password.
11.4.7 Intrusion Prevention Systems
Intrusion prevention systems (IPS) are designed to detect an intrusion and take action to stop it. There are two general types of IPS, and many network managers choose to install both. The first type is a network-based IPS. With a network-based IPS, an IPS sensor is placed on key network circuits. An IPS sensor is simply a device running a special operating system that monitors all network packets on that circuit and reports intrusions to an IPS management console. The second type of IPS is the host-based IPS, which, as the name suggests, is a software package installed on a host or server. The host-based IPS monitors activity on the server and reports intrusions to the IPS management console. There are two fundamental techniques that these types of IPSs can use to determine that an intrusion is in progress; most IPSs use both techniques. The first technique is misuse detection, which compares monitored activities with signatures of known attacks.
MANAGEMENT FOCUS 11-10: Social Engineering Wins Again
Danny had collected all the information he needed to steal the plans for the new product.
He knew the project manager's name (Bob Billings), phone number, department name, office number, computer user ID, and employee number, as well as the project manager's boss's name. These had come from the company Web site and a series of innocuous phone calls to helpful receptionists. He had also tricked the project manager into giving him his password, but that hadn't helped because the company used one-time passwords generated by a time-based token system called Secure ID. So, after getting the phone number of the computer operations room from another helpful receptionist, all he needed was a snowstorm. Late one Friday night, a huge storm hit and covered the roads with ice. The next morning, Danny called the computer operations room:
Danny: "Hi, this is Bob Billings in the Communications Group. I left my Secure ID in my desk and I need it to do some work this weekend. There's no way I can get into the office this morning. Could you go down to my office and get it for me? And then read my code to me so I can log in?"
Operations: "Sorry, I can't leave the Operations Center."
Danny: "Do you have a Secure ID yourself?"
Operations: "There's one here we keep for emergencies."
Danny: "Listen. Can you do me a big favor? Could you let me borrow your Secure ID? Just until it's safe to drive in?"
Operations: "Who are you again?"
Danny: "Bob Billings. I work for Ed Trenton."
Operations: "Yeah, I know him."
Danny: "My office is on the second floor (2202B). Next to Roy Tucker. It'd be easier if you could just get my Secure ID out of my desk. I think it's in the upper left drawer." (Danny knew the guy wouldn't want to walk to a distant part of the building and search someone else's office.)
Operations: "I'll have to talk to my boss."
After a pause, the operations technician came back on and asked Danny to call his manager on his cell phone.
After talking with the manager and providing some basic information to "prove" he was Bob Billings, Danny kept asking about having the Operations technician go to "his" office. Finally, the manager decided to let Danny use the Secure ID in the Operations Center. The manager called the technician and gave permission for him to tell "Bob" the one-time password displayed on their Secure ID any time he called that weekend. Danny was in.
Adapted from: Kevin Mitnick and William Simon, The Art of Deception, John Wiley and Sons, 2002.
Whenever an attack signature is recognized, the IPS issues an alert and discards the suspicious packets. The problem, of course, is keeping the database of attack signatures up to date as new attacks are invented. The second fundamental technique is anomaly detection, which works well in stable networks by comparing monitored activities with the "normal" set of activities. When a major deviation is detected (e.g., a sudden flood of ICMP ping packets, an unusual number of failed log-ins to the network manager's account), the IPS issues an alert and discards the suspicious packets. The problem, of course, is false alarms when situations occur that produce valid network traffic that is different from normal (e.g., on a heavy trading day on Wall Street, e-trade receives a larger-than-normal volume of messages). Intrusion prevention systems are often used in conjunction with other security tools such as firewalls (Figure 11-18). In fact, some firewalls now include IPS functions. One problem is that the IPS and its sensors and management console are a prime target for attackers. Whatever IPS is used, it must be very secure against attack. Some organizations deploy redundant IPSs from different vendors (e.g., a network-based IPS from one vendor and a host-based IPS from another) to decrease the chance that the IPS can be hacked.
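The anomaly-detection idea above can be sketched as a simple statistical check; this is a hypothetical illustration of the principle, not how any particular commercial IPS works. Keep a baseline of "normal" traffic counts and flag a measurement that deviates from the baseline mean by more than a few standard deviations.

```python
import statistics

def is_anomalous(baseline, current, threshold=3.0):
    """Flag `current` if it lies more than `threshold` standard
    deviations from the mean of the baseline observations."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Packets per second observed on a circuit during normal operation.
normal = [980, 1020, 1005, 990, 1010, 995, 1000]
print(is_anomalous(normal, 1015))   # within normal variation → False
print(is_anomalous(normal, 9500))   # sudden flood (e.g., ICMP pings) → True
```

The `threshold` parameter is exactly the false-alarm tradeoff discussed above: a lower threshold catches more attacks but also flags more legitimate traffic spikes.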
[FIGURE 11-18 Intrusion prevention system (IPS). The figure shows network-based IPS sensors on key circuits, a NAT firewall with a network-based IPS, a Web server with host-based and application-based IPS, mail and DNS servers with host-based IPS in the DMZ, internal subnets connected by routers and switches, and an IPS management console. DMZ = demilitarized zone; DNS = Domain Name Service; NAT = network address translation]
Although IPS monitoring is important, it has little value unless there is a clear plan for responding to a security breach in progress. Every organization should have a clear response planned if a break-in is discovered. Many large organizations have emergency response "SWAT" teams ready to be called into action if a problem is discovered. The best example is CERT, which is the Internet's emergency response team. CERT has helped many organizations establish such teams. Responding to an intrusion can be more complicated than it first seems. For example, suppose the IPS detects a DoS attack from a certain IP address. The immediate reaction could be to discard all packets from that IP address; however, in the age of IP spoofing, the attacker could fake the address of your best customer and trick you into discarding packets from it.
11.4.8 Intrusion Recovery
Once an intrusion has been detected, the first step is to identify how the intruder gained unauthorized access and prevent others from breaking in the same way. Some organizations will simply choose to close the door on the attacker and fix the security problem. About 30% of organizations take a more aggressive approach by logging the intruder's activities and working with police to catch the individuals involved. Once identified, the attacker will be charged with criminal activities and/or sued in civil court.
Several states and provinces have introduced laws requiring organizations to report intrusions and theft of customer data, so the percentage of intrusions reported and prosecuted will increase. A whole new area called computer forensics has recently opened up. Computer forensics is the use of computer analysis techniques to gather evidence for criminal and/or civil trials. The basic steps of computer forensics are similar to those of traditional forensics, but the techniques are different. First, identify potential evidence. Second, preserve evidence by making backup copies and use those copies for all analysis. Third, analyze the evidence. Finally, prepare a detailed legal report for use in prosecutions. Although companies are sometimes tempted to launch counterattacks (or counterhacks) against intruders, this is illegal. Some organizations have taken their own steps to snare intruders by using entrapment techniques. The objective is to divert the attacker's attention from the real network to an attractive server that contains only fake information. This server is often called a honey pot. The honey pot server contains highly interesting, fake information available only through illegal intrusion to "bait" the intruder. The honey pot server has sophisticated tracking software that monitors access to this information and allows the organization and law enforcement officials to trace and legally document the intruder's actions. Possession of this information then becomes final legal proof of the intrusion.
11.5 BEST PRACTICE RECOMMENDATIONS
This chapter provides numerous suggestions on business continuity planning and intrusion prevention. Good security starts with a clear disaster recovery plan and a solid security policy.
Probably the best security investment is user training: training individual users on data recovery and on ways to defeat social engineering. But this doesn't mean that technologies aren't needed. Figure 11-19 shows the most commonly used security controls. Most organizations now routinely use antivirus software, firewalls, VPNs, encryption, and IPS. Even so, rarely does a week pass without a new warning of a major vulnerability. Leave a server unattended for two weeks, and you may find that you have five critical patches to install. People are now asking, "Will it end?" Is (in)security just a permanent part of the information systems landscape? In a way, yes. The growth of information systems, along with the new and dangerous ability to reach into them from around the world, has created new opportunities for criminals. Mix the possibility of stealing valuable, marketable information with the low probability of getting caught and punished, and we should expect increasing numbers of attacks.
[FIGURE 11-19 What security controls are used. A bar chart showing the percentage of organizations (0%-100%) using antivirus software, firewalls, encryption in transit, encryption on servers, IPS, and application firewalls]
Perhaps the question should be: Does it have to be this bad? Unquestionably, we could be protecting ourselves better. We could better enforce security policies and restrict access. But all of this has a cost. Attackers are writing and distributing a new generation of attack tools right before us: tools that are very powerful, more difficult to detect, and very easy to use. Usually such tools are much easier to use than their defensive countermeasures. The attackers have another advantage, too. Whereas the defenders have to protect all vulnerable points all the time to be safe, the attacker just has to break into one place one time to be successful.
So what may we expect in the future in "secure" organizational environments? We would expect to see strong desktop management, including the use of thin clients. Centralized desktop management, in which individual users are not permitted to change the settings on their computers, may become common, along with regular reimaging of computers to prevent Trojans and viruses and to install the most recent security patches. All external software downloads will likely be prohibited. Continuous content filtering, in which all incoming packets (e.g., Web, email) are scanned, may become common, thus significantly slowing down the network. All server files and communications with client computers would be encrypted, further slowing down transmissions. Finally, all written security policies would be rigorously enforced. Violations of security policies might even become a "capital offense" (i.e., one violation and you are fired). We may look back forlornly on the early days of the Internet, when we could "do anything," as its Golden Days.
A Day in the Life: Network Security Manager
"Managing security is a combination of detective work and prognostication about the future." A network security manager spends much of his or her time doing three major things. First, much time is spent looking outside the organization by reading and researching potential security holes and new attacks, because the technology and attack opportunities change so fast. It is important to understand new attack threats, new scripting tools used to create viruses, remote access Trojans and other harmful software, and the general direction in which the hacking community is moving. Much important information is available at Web sites such as those maintained by CERT (www.cert.org) and SANS (www.sans.org). This information is used to create new versions of standard computer images that are more robust in defeating attacks and to develop recommendations for the installation of application security patches.
It also means that he or she must update the organization's written security policies and inform users of any changes. Second, the network security manager looks inward toward the networks he or she is responsible for. He or she must check the vulnerability of those networks by thinking like a hacker to understand how the networks may be susceptible to attack, which often means scanning for open ports and unguarded parts of the networks and looking for computers that have not been updated with the latest security patches. It also means looking for symptoms of compromised machines, such as new patterns of network activity or unknown services that have been recently opened on a computer. Third, the network security manager must respond to security incidents. This usually means "firefighting": quickly responding to any security breach, identifying the cause, collecting forensic evidence for use in court, and fixing the computer or software application that has been compromised.
Source: With thanks to Kenn Crook
11.6 IMPLICATIONS FOR MANAGEMENT
Network security was once an esoteric field of interest to only a few dedicated professionals. Today, it is the fastest-growing area in networking. The cost of network security will continue to increase as the tools available to network attackers become more sophisticated, as organizations rely more and more on networks for critical business operations, and as information warfare perpetrated by nations or terrorists becomes more common. As the cost of networking technology decreases, the cost of staff and networking technologies providing security will become an increasingly larger proportion of an organization's networking budget. As organizations and governments see this, there will be a call for tougher laws and better investigation and prosecution of network attackers.
Security tools available to organizations will continue to increase in sophistication, and the use of encryption will become widespread in most organizations. There will be an ongoing "arms race" between security officers in organizations and attackers. Software security will become an important factor in selecting operating systems, networking software, and application software. Those companies that provide more secure software will see a steady increase in market share, whereas those that don't will gradually lose ground.
SUMMARY
Types of Security Threats
In general, network security threats can be classified into one of two categories: (1) business continuity and (2) intrusions. Business continuity can be interrupted by disruptions that are minor and temporary, but some may also result in the destruction of data. Natural (or man-made) disasters may occur that destroy host computers or large sections of the network. Intrusion refers to intruders (external attackers or organizational employees) gaining unauthorized access to files. The intruder may gain knowledge, change files to commit fraud or theft, or destroy information to injure the organization.
Risk Assessment
Developing a secure network means developing controls that reduce or eliminate threats to the network. Controls prevent, detect, and correct whatever might happen to the organization when its computer-based systems are threatened. The first step in developing a secure network is to conduct a risk assessment. This is done by identifying the key assets and threats and comparing the nature of the threats to the controls designed to protect the assets. A company can pick one of several risk assessment frameworks that are considered to be industry standards.
Business Continuity
The major threats to business continuity are viruses, theft, denial-of-service attacks, device failure, and disasters.
Installing and regularly updating antivirus software is one of the most important and commonly used security controls. Protecting against denial-of-service attacks is challenging and often requires special hardware. Theft is one of the most often overlooked threats and can be prevented by good physical security, especially the physical security of laptop computers. Devices fail, so the best way to prevent network outages is to ensure that the network has redundant circuits and devices (e.g., switches and routers) on mission-critical network segments (e.g., the Internet connection and core backbone). Avoiding disasters can take a few commonsense steps, but no disaster can be completely avoided; most organizations focus on ensuring important data are backed up off-site and having a good, tested disaster recovery plan.
Intrusion Prevention
Intruders can be organization employees or external hackers who steal data (e.g., customer credit card numbers) or destroy important records. A security policy defines the key stakeholders and their roles, including what users can and cannot do. Firewalls often stop intruders at the network perimeter by permitting only authorized packets into the network, by examining application layer packets for known attacks, and/or by hiding the organization's private IP addresses from the public Internet. Physical and dial-up security are also useful perimeter security controls. Patching security holes—known bugs in an operating system or application software package—is important to prevent intruders from using these to break in. Single key or public key encryption can protect data in transit or data stored on servers. User authentication ensures only authorized users can enter the network and can be based on something you know (passwords), something you have (access cards), or something you are (biometrics).
Preventing social engineering, where hackers trick users into revealing their passwords, is very difficult. Intrusion prevention systems are tools that detect known attacks and unusual activity and enable network managers to stop an intrusion in progress. Intrusion recovery involves correcting any damaged data, reporting the intrusion to the authorities, and taking steps to prevent other intruders from gaining access the same way.
KEY TERMS
access card, 337; access control list (ACL), 320; account, 335; Advanced Encryption Standard (AES), 331; adware, 329; algorithm, 330; anomaly detection, 340; antivirus software, 309; application-level firewall, 321; asymmetric encryption, 329; authentication, 333; authentication server, 338; availability, 298; backup controls, 315; biometric system, 337; brute-force attack, 330; business continuity, 298; central authentication, 338; certificate, 338; certificate authority (CA), 334; ciphertext, 329; computer forensics, 342; confidentiality, integrity, and availability (CIA), 298; continuous data protection (CDP), 316; controls, 300; corrective control, 300; cracker, 318; DDoS agent, 310; DDoS handler, 310; decryption, 329; denial-of-service (DoS) attack, 310; desktop management, 343; detective control, 300; disaster recovery drill, 316; disaster recovery firm, 318; disaster recovery plan, 315; disk mirroring, 314; distributed denial-of-service (DDoS) attack, 310; eavesdropping, 324; encryption, 329; entrapment, 342; fault-tolerant server; financial impact, 314; firewall, 320; hacker, 318; honey pot, 342; host-based IPS, 339; information warfare, 318; integrity, 298; Internet Key Exchange (IKE), 334; intrusion prevention systems (IPS), 339; IP Security Protocol (IPSec), 334; IP spoofing, 321; IPS management console, 339; IPS sensor, 339; IPSec transport mode, 334; IPSec tunnel mode, 335; Kerberos, 338; key, 330; key management, 330; misuse detection, 339; NAT firewall, 322; network address translation (NAT), 322; network-based IPS, 339; one-time password, 337; online backup, 317; packet-level firewall, 320; passphrase, 337; password, 337; patch, 326; phishing, 338; physical security, 313; plaintext, 329; Pretty Good Privacy (PGP), 334; preventive control, 300; private key, 331; public key, 331; public key encryption, 331; public key infrastructure (PKI), 331; RC4, 331; recovery controls, 315; redundancy, 313; redundant array of independent disks (RAID), 314; risk assessment, 301; risk assessment frameworks, 301; risk mitigation controls, 307; risk score, 305; rootkit, 327; RSA, 331; Secure Sockets Layer (SSL), 334; secure switch, 325; security hole, 325; security policy, 319; smart card, 337; sniffer program, 325; social engineering, 338; something you are, 337; something you have, 337; something you know, 337; spyware, 329; symmetric encryption, 329; threat, 304; threat scenario, 304; time-based token, 337; token, 337; traffic analysis, 311; traffic anomaly analyzer, 312; traffic anomaly detector, 311; traffic filtering, 310; traffic limiting, 310; triple DES (3DES), 331; Trojan horse, 327; uninterruptible power supply (UPS), 314; user profile, 335; user authentication, 335; virus, 309; worm, 309; zero-day attack, 326
QUESTIONS
1. What factors have brought increased emphasis on network security?
2. Briefly outline the steps required to complete a risk assessment.
3. Name and describe the main impact areas. Who should be responsible for assessing what is meant by low/medium/high impact for each of the impact areas? Explain your answer.
4. What are some of the criteria that can be used to rank security risks?
5. What are the most common security threats? What are the most critical? Why?
6. Explain the purpose of threat scenarios. What are the steps in preparing threat scenarios?
7. What is the purpose of the risk score, and how is it calculated?
8. In which step of the risk assessment should existing controls be documented?
9. What are the four possible risk control strategies?
How do we pick which one to use?
10. Why is it important to identify improvements that are needed to mitigate risks?
11. What is the purpose of a disaster recovery plan? What are five major elements of a typical disaster recovery plan?
12. What is a computer virus? What is a worm?
13. Explain how a denial-of-service attack works.
14. How does a denial-of-service attack differ from a distributed denial-of-service attack?
15. What is a disaster recovery firm? When and why would you establish a contract with such a firm?
16. What is online backup?
17. People who attempt intrusion can be classified into four different categories. Describe them.
18. There are many components in a typical security policy. Describe three important components.
19. What are three major aspects of intrusion prevention (not counting the security policy)?
20. How do you secure the network perimeter?
21. What is physical security, and why is it important?
22. What is eavesdropping in a computer security sense?
23. What is a sniffer?
24. How do you secure dial-in access?
25. What is a firewall?
26. How do the different types of firewalls work?
27. What is IP spoofing?
28. What is a NAT firewall, and how does it work?
29. What is a security hole, and how do you fix it?
30. Explain how a Trojan horse works.
31. Compare and contrast symmetric and asymmetric encryption.
32. Describe how symmetric encryption and decryption work.
33. Describe how asymmetric encryption and decryption work.
34. What is key management?
35. How does DES differ from 3DES? From RC4? From AES?
36. Compare and contrast DES and public key encryption.
37. Explain how authentication works.
38. What is PKI, and why is it important?
39. What is a certificate authority?
40. How does PGP differ from SSL?
41. How does SSL differ from IPSec?
42. Compare and contrast IPSec tunnel mode and IPSec transport mode.
43. What are the three major ways of authenticating users? What are the pros and cons of each approach?
44. What are the different types of one-time passwords, and how do they work?
45. Explain how a biometric system can improve security. What are the problems with it?
46. Why is the management of user profiles an important aspect of a security policy?
47. How does network authentication work, and why is it useful?
48. What is social engineering? Why does it work so well?
49. What techniques can be used to reduce the chance that social engineering will be successful?
50. What is an intrusion prevention system?
51. Compare and contrast a network-based IPS and a host-based IPS.
52. How does IPS anomaly detection differ from misuse detection?
53. What is computer forensics?
54. What is a honey pot?
55. What is desktop management?
56. A few security consultants have said that broadband and wireless technologies are their best friends. Explain.
57. Most hackers start their careers breaking into computer systems as teenagers. What can we as a community of computer professionals do to reduce the temptation to become a hacker?
58. Some experts argue that CERT's posting of security holes on its Web site causes more security break-ins than it prevents and should be stopped. What are the pros and cons on both sides of this argument? Do you think CERT should continue to post security holes?
59. What is one of the major risks of downloading unauthorized copies of music files from the Internet (aside from the risk of jail, fines, and lawsuits)?
60. Although it is important to protect all servers, some servers are more important than others. What server(s) are the most important to protect, and why?

EXERCISES
A. Conduct a risk assessment of your organization's networks. Some information may be confidential, so report what you can.
B. Investigate and report on the activities of CERT (the Computer Emergency Response Team).
C.
Investigate the capabilities and costs of a disaster recovery service.
D. Investigate the capabilities and costs of a firewall.
E. Investigate the capabilities and costs of an intrusion prevention system.
F. Investigate the capabilities and costs of an encryption package.
G. Investigate the capabilities and costs of an online backup service.

MINICASES
I. Belmont State Bank. Belmont State Bank is a large bank with hundreds of branches that are connected to a central computer system. Some branches are connected over dedicated circuits and others use Multiprotocol Label Switching (MPLS). Each branch has a variety of client computers and ATMs connected to a server. The server stores the branch's daily transaction data and transmits it several times during the day to the central computer system. Tellers at each branch use a four-digit numeric password, and each teller's computer is transaction-coded to accept only its authorized transactions. Perform a risk assessment.
II. Western Bank. Western Bank is a small, family-owned bank with six branches spread over the county. It has decided to move onto the Internet with a Web site that permits customers to access their accounts and pay bills. Design the key security hardware and software the bank should use.
III. Classic Catalog Company, Part 1. Classic Catalog Company runs a small but rapidly growing catalog sales business. It outsourced its Web operations to a local ISP for several years, but as sales over the Web have become a larger portion of its business, it has decided to move its Web site onto its own internal computer systems. It has also decided to undertake a major upgrade of its own internal networks. The company has two buildings: an office complex and a warehouse. The two-story office building has 60 computers. The first floor has 40 computers, 30 of which are devoted to telephone sales.
The warehouse, located 400 feet across the company's parking lot from the office building, has about 100,000 square feet, all on one floor. The warehouse has 15 computers in the shipping department located at one end of the warehouse. The company is about to experiment with using wireless handheld computers to help employees more quickly locate and pick products for customer orders. Based on traffic projections for the coming year, the company plans to use a T1 connection from its office to its ISP. It has three servers: the main Web server, an email server, and an internal application server for its application systems (e.g., orders, payroll). Perform a risk assessment.
IV. Classic Catalog Company, Part 2. Read Minicase III above. Outline a brief business continuity plan, including controls to reduce the risks in advance as well as a disaster recovery plan.
V. Classic Catalog Company, Part 3. Read Minicase III above. Outline a brief security policy and the controls you would implement to control unauthorized access.
VI. Classic Catalog Company, Part 4. Read Minicase III above. What patching policy would you recommend for Classic Catalog?
VII. Personal Password Storage and Protection. To help us not forget our many passwords, there are several companies that provide password managers. Find the top five password manager programs, compare their features and costs, and make a presentation of your findings to your classmates.

CASE STUDY: NEXT-DAY AIR SERVICE
See the Web site at www.wiley.com/college/fitzgerald

HANDS-ON ACTIVITY 11A: Securing Your Computer
This chapter has focused on security, including risk analysis, business continuity, and intrusion prevention. At first glance, you may think security applies to corporate networks, not your network.
However, if you have a LAN at your house or apartment, or even if you just own a desktop or laptop computer, security should be one of your concerns. There are so many potential threats, both to your business continuity (which might be your education) and of intrusion into your computer(s), that you need to take action. You should perform your own risk analysis, but this section provides a brief summary of some simple actions that will greatly increase your security. Do it this week; don't procrastinate. Our focus is on Windows security, because most readers of this book use Windows computers, but the same advice (with different commands) applies to Apple computers.

Business Continuity
If you run your own business, then ensuring business continuity should be a major focus of your efforts. But even if you are "just" an employee or a student, business continuity is important. What would happen if your hard disk failed just before the due date for a major report?
1. The first and most important security action you can take is to configure Windows to perform automatic updates. This ensures you have the latest patches and updates installed.
2. The second most important action is to buy and install antivirus software such as that from Symantec. Be sure to configure it for regular updates, too. If you perform just these two actions, you will be relatively secure from viruses, but you should still scan your system for viruses on a regular basis, such as the first of every month, when you pay your rent or mortgage.
3. Spyware is another threat. You should buy and install antispyware software that provides the same protection against spyware that antivirus software does against viruses. Spybot is a good package. Be sure to configure this software for regular updates and scan your system on a regular basis.
4. One of the largest sources of viruses, spyware, and adware is free software and music/video files downloaded from the Internet.
Simply put, don't download any file unless it is from a trusted vendor or distributor of software and files.
5. Develop a disaster recovery plan. You should plan today for what you would do if your computer were destroyed. What files would you need? If there are any important files that you wouldn't want to lose (e.g., reports you're working on, key data, or precious photos), you should develop a backup and recovery plan for them. The simplest is to copy the files to a shared directory on another computer on your LAN, but this won't enable you to recover the files if your apartment or house were destroyed by fire, for example. A better plan is to subscribe to a free online backup service such as mozy.com (think CDP on the cheap). If you don't use such a site, buy a large USB drive, copy your files to it, and store it off-site in your office or at a friend's house. A plan is only good if it is followed, so your data should be backed up regularly, such as on the first of every month.

Deliverables
1. Perform a risk analysis for your home network.
2. Prepare a disaster recovery plan for your home network.
3. Research antivirus and antispyware software that you can purchase for your home network.

HANDS-ON ACTIVITY 11B: How to Set Up Encryption on Your Computer
If you want to protect the data on your computer, you need to encrypt it. Encryption is widely used on the Internet these days: when you make a purchase on Amazon or another retailer, your computer encrypts your credit card information before it is transferred over the Internet. Should you encrypt the data on your computer? The answer is yes. What if your computer gets stolen? You might say that your computer is password protected; well, breaking into a password-protected computer is extremely easy. Should you then encrypt only your files, or should you encrypt the entire drive?
If you encrypt only your files and your computer is stolen, the criminal will not be able to read your files but will still be able to install anything on your computer and see all the nonencrypted files. If you encrypt the entire drive, it is extremely difficult for anybody even to boot your computer without the password; however, if you ever forget your password or your drive gets corrupted, you probably won't be able to retrieve your data files at all. Therefore, we suggest that you encrypt only your files rather than the entire drive.

In this activity, we introduce TrueCrypt, free, open-source software that can be used on Windows, Mac OS, and Linux. Here is what you need to do to download TrueCrypt:
1. Go to http://www.truecrypt.org/downloads and download the version of the software for your current operating system.
2. Once it is downloaded, install it. Accept the license terms and accept the default settings that the program offers you.
3. If this is the first time you are using TrueCrypt, accept the suggestion to read the Beginner's Tutorial. You can find this tutorial here: http://www.truecrypt.org/docs/tutorial

Now you are ready to encrypt files on your computer. Here is a step-by-step guide:
1. Launch TrueCrypt. If you are using Windows, it will appear in your Start Menu.
2. Click on Create Volume.
3. The Wizard window will appear; choose the first option, Create an encrypted file container.
4. Choose to create the volume within a file. TrueCrypt calls this a container. (Later you can experiment with the other options that TrueCrypt allows.)
5. Select the Standard TrueCrypt volume.
6. Now you need to specify where you wish the volume to be created. This will be a file that you can delete or move just like any other file. You may want to create it in My Documents and name it "Volume1." Hit the Save button to save your volume, then click the Next button in the Wizard window. Caution: Do not select any existing file.
Selecting an existing file will not encrypt the file but will overwrite it, and all your data will be lost.
7. Now select the encryption method. We suggest you go with the default, AES.
8. In this step, you need to specify the size of the container. We suggest you make it 1 MB, although you can create a larger container if you are planning on encrypting a lot of files.
9. This is the most important step: you need to select a password. Once you type and confirm your selected password, you will be allowed to click the Next button.
10. To create a strong key, move your mouse around randomly for a short period of time. Then click Next.
11. Click Exit. You have successfully created a TrueCrypt volume (file container), and the Wizard window will disappear.
12. Select a drive letter (let's say J:) where you want the container to be mounted and click Select File.
13. In the file selector, select the volume you created in Step 6 (Volume1) and click Open.
14. In the TrueCrypt window, select Mount. A dialog box requesting the password you created in Step 9 will appear. Enter the password and click OK.
15. You have successfully mounted the container as virtual disk J:. This virtual disk is entirely encrypted and behaves like a real disk. You can save or copy files to this disk, and they will be encrypted on the fly.

While encryption will not protect you against malware, or against somebody accessing your files if you leave your computer turned on in public spaces, it provides an additional layer of security. This Hands-On Activity is a beginner's guide to encryption; the next Hands-On Activity shows you how to secure your email using PGP. There are other controls you can implement on your laptop, such as encrypting your Dropbox folder or creating a decoy operating system. Now it's up to you to learn more about the exciting world of encryption. Enjoy!
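The TrueCrypt walkthrough above is entirely point-and-click. To make the underlying idea concrete (a key derived from your passphrase turns plaintext bytes into unreadable ciphertext, and the same passphrase reverses it), here is a deliberately simplified Python sketch. The keystream construction below, SHA-256 in counter mode, is our own illustrative choice, not what TrueCrypt actually uses; real volume encryption uses AES with a salted, slow key-derivation function, so treat this as a teaching toy, never as protection for real data.

```python
import hashlib

def keystream(passphrase: str, length: int) -> bytes:
    """Derive `length` pseudo-random bytes by hashing passphrase + a block
    counter. Illustrative only: real tools use AES and a proper salted KDF."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(f"{passphrase}:{counter}".encode()).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def xor_crypt(data: bytes, passphrase: str) -> bytes:
    """XOR data with the passphrase-derived keystream.
    Applying the same function twice with the same passphrase decrypts."""
    ks = keystream(passphrase, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"quarterly-report contents"
ciphertext = xor_crypt(secret, "correct horse battery staple")
assert ciphertext != secret  # unreadable without the passphrase
assert xor_crypt(ciphertext, "correct horse battery staple") == secret
```

Running `xor_crypt` twice with the same passphrase returns the original bytes, which mirrors the on-the-fly encrypt/decrypt behavior you observe when copying files to and from the mounted TrueCrypt volume.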
Deliverable
Encrypt a folder on your home computer. Show a screen shot of the encrypted folder.

HANDS-ON ACTIVITY 11C: Encryption Lab
The purpose of this lab is to practice encrypting and decrypting email messages using a standard called PGP (Pretty Good Privacy) that is implemented in the open-source software GNU Privacy Guard. You will need to download and install the Kleopatra software on your computer from this Web site: http://ftp.gpg4win.org/Beta/gpg4win-2.1.0-rc2.exe. For Mac OS X users, please visit this Web site: http://macgpg.sourceforge.net/.
1. Open Kleopatra. The first step in sending encrypted messages is to create your personal OpenPGP key pair: your personal private and public key.
2. Click on File, select New Certificate, then select Create a personal OpenPGP key pair and click Next.
3. Fill out your name as you want it to be displayed with your public key and the email address from which you will be sending and receiving emails. The comment window is optional, and you can leave it empty. Click Next. Check to make sure that your name and email address are correctly entered; if so, click Create Key.
4. The system will now prompt you to enter a passphrase. This is your password to access your key, and it will also allow you to encrypt and decrypt messages. If the passphrase is not secure enough, the system will tell you; the quality indicator has to be green and show 100% for an acceptable passphrase. Once your passphrase is accepted, the system will prompt you to reenter it. Once this is done, Kleopatra will create your public and private key pair.
5. The next screen will indicate that a "fingerprint" of your newly created key pair has been generated. This fingerprint is unique; no one else has this fingerprint. You don't need to select any of the next steps suggested by the system.
6. The next step is to make your public key public so that other people can send encrypted messages to you.
In the Kleopatra window, right-click on your certificate and select Export Certificates from the menu. Select a folder on your computer where you want to save the public key and name it YourName public key.asc.
7. To see your public key, open this file in Notepad. You should see a block of fairly confusing text and numbers; my public key is shown in Figure 11-20. To share this public key, post your asc file on the class Web site. This key should be made public, so don't worry about sharing it. You can even post it on your own Web site so that other people can send you encrypted messages.

FIGURE 11-20 Example of a public key

8. Now you should import the public key of the person with whom you want to exchange encrypted messages. Save the asc file with the public key on your computer. Then click the Import Certificates icon in Kleopatra. Select the asc file you want to import and click OK. Kleopatra will acknowledge the successful import of the public key.
9. The final step in importing the public key is to set the trust level to full trust. Click on the certificate, select Change Owner Trust from the menu, and select "I believe checks are very accurate."
10. Now you are ready to exchange encrypted messages! Open Webmail, Outlook, or any other email client and compose a message. Copy the text of the message to the clipboard by marking it and hitting CTRL+X. Right-click the Kleopatra icon on your status bar and select Clipboard, then Encrypt (Figure 11-21). Click on Add Recipient and select the person to whom you want to send this message (Figure 11-22); I will send a message to Alan. Once the recipient is selected, just click Next. Kleopatra will return a screen confirming that encryption was successful.
11. The encrypted message is now stored in your computer's clipboard. Open the email message window and paste (CTRL+V) the encrypted message into the body of the email.
Now you are ready to send your first encrypted email!

FIGURE 11-21 Encrypting a message using Kleopatra

12. To decrypt an encrypted message, select the text in the email (you need to select the entire message, from BEGIN PGP MESSAGE to END PGP MESSAGE). Copy the message to the clipboard with CTRL+C. Right-click the Kleopatra icon on your status bar, then select Clipboard and Decrypt & Verify. This is very similar to how you encrypted the message. The decrypted message will be stored in the clipboard. To read it, just paste it into Word or any other text editor. You are done!

FIGURE 11-22 Selecting a recipient of an encrypted message

Deliverables
1. Create your PGP key pair using Kleopatra. Post the asc file of your public key on a server/class Web site as instructed by your professor.
2. Import the certificate (public key) of your professor into Kleopatra. Send your instructor an encrypted message that contains information about your favorite food, hobbies, places to travel, and so on.
3. Your professor will send you an encrypted response. Decrypt the email and print its content so that you can submit a hard copy in class.

Trim Size: 8in x 10in. Fitzergald c12.tex V2 - July 25, 2014 9:09 A.M. Page 353.

CHAPTER 12: NETWORK MANAGEMENT
Network managers perform two key tasks: (1) designing new networks and network upgrades and (2) managing the day-to-day operation of existing networks. The prior chapters have examined network design, so this chapter focuses on day-to-day network management, discussing the things that must be done to ensure that the network functions properly, although we do discuss some special-purpose equipment designed to improve network performance.
Our focus is on the network management organization and the basic functions that a network manager must perform to operate a successful network.

OBJECTIVES
◾ Understand what is required to manage the day-to-day operation of networks
◾ Be familiar with the network management organization
◾ Understand configuration management
◾ Understand performance and fault management
◾ Be familiar with end user support
◾ Be familiar with cost management

OUTLINE
12.1 Introduction
12.2 Designing for Network Performance
12.2.1 Managed Networks
12.2.2 Managing Network Traffic
12.2.3 Reducing Network Traffic
12.3 Configuration Management
12.3.1 Configuring the Network and Client Computers
12.3.2 Documenting the Configuration
12.4 Performance and Fault Management
12.4.1 Network Monitoring
12.4.2 Failure Control Function
12.4.3 Performance and Failure Statistics
12.4.4 Improving Performance
12.5 End User Support
12.5.1 Resolving Problems
12.5.2 Providing End User Training
12.6 Cost Management
12.6.1 Sources of Costs
12.6.2 Reducing Costs
12.7 Implications for Management
Summary

12.1 INTRODUCTION
Network management is the process of operating, monitoring, and controlling the network to ensure it works as intended and provides value to its users. The primary objective of the data communications function is to move application-layer data from one location to another in a timely fashion and to provide the resources that allow this transfer to occur. This transfer of information may take place within a single department, between departments in an organization, or with entities outside the organization across private networks or the Internet. Without a well-planned, well-designed network and without a well-organized network management staff, operating the network becomes extremely difficult. Unfortunately, many network managers spend most of their time firefighting: dealing with breakdowns and immediate problems.
If managers do not spend enough time on planning and organizing the network and networking staff, which are needed to predict and prevent problems, they are destined to be reactive rather than proactive in solving problems.

MANAGEMENT FOCUS 12-1: What Do Network Managers Do?
If you were to become a network manager, some of your responsibilities and tasks would be to
◾ Manage the day-to-day operations of the network.
◾ Provide support to network users.
◾ Ensure the network is operating reliably.
◾ Evaluate and acquire network hardware, software, and services.
◾ Manage the network technical staff.
◾ Manage the network budget, with emphasis on controlling costs.
◾ Keep abreast of the latest technological developments in computers, data communications devices, network software, and the Internet.
◾ Keep abreast of the latest technological developments in telephone technologies and network services.
◾ Assist senior management in understanding the business implications of network decisions and the role of the network in business operations.
◾ Develop a strategic (long-term) networking and voice communications plan to meet the organization's policies and goals.

MANAGEMENT FOCUS 12-2: Five Key Management Tasks
Planning activities
◾ Forecasting
◾ Establishing objectives
◾ Scheduling
◾ Budgeting
◾ Allocating resources
◾ Developing policies
Organizing activities
◾ Developing organizational structure
◾ Delegating
◾ Establishing relationships
◾ Establishing procedures
◾ Integrating the smaller organization with the larger organization
Directing activities
◾ Initiating activities
◾ Decision making
◾ Communicating
◾ Motivating
Controlling activities
◾ Establishing performance standards
◾ Measuring performance
◾ Evaluating performance
◾ Correcting performance
Staffing activities
◾ Interviewing people
◾ Selecting people
◾ Developing people

One major organizational challenge is the integration of the voice communication function with the data communications function. Traditionally, voice communications were handled by a manager in the facilities department who supervised the telephone switchboard systems and also coordinated the installation and maintenance of the organization's voice telephone networks. By contrast, data communications traditionally were handled by the IT department, because the staff installed their own communication circuits as the need arose rather than coordinating with the voice communications staff. This separation of voice and data worked well over the years, but today changing communication technologies are leading most organizations to combine the functions under the IT department. Voice communications are moving to VoIP, with VoIP phones replacing traditional analog phones. We are moving from an era in which the computer system is the dominant IT function to one in which communications networks are the dominant IT function.
In some organizations, the total cost of both voice and data communications will equal or exceed the total cost of the computer systems.

12.2 DESIGNING FOR NETWORK PERFORMANCE
At the end of the previous chapters, we discussed best practice designs for LANs, backbones, WANs, and WLANs and examined how different technologies and services offer different effective data rates at different costs. In the backbone and WAN chapters, we also examined different topologies and contrasted the advantages and disadvantages of each. So at this point, you should have a good understanding of the best choices for technologies and services and how to put them together into a good network design. In this section, we examine several higher-level concepts used to design the network for the best performance.

12.2.1 Managed Networks
The single most important element that contributes to the performance of a network is a managed network that uses managed devices. Managed devices are standard devices, such as switches and routers, that have small onboard computers to monitor the traffic that flows through the device as well as the status of the device and other devices connected to it. Managed devices perform their functions (e.g., routing, switching) and also record data on the traffic they process. These data can be sent to the network manager's computer when the device receives a special control message requesting them, or the device can send an alarm message to the network manager's computer if it detects a critical situation such as a failing device or a huge increase in traffic. In this way, network problems can be detected and reported by the devices themselves before they become serious. In the case of a failing network card, a managed device could record the increased number of retransmissions required to successfully transmit messages and inform the network management software of the problem.
A managed switch is often able to detect the faulty transmissions from a failing network card, disable the incoming circuit so that the card cannot send any more messages, and issue an alarm to the network manager. In either case, finding and fixing problems is much simpler, requiring minutes, not hours.

Network Management Software
A managed network requires both hardware and software: managed devices (e.g., switches, routers, APs) to monitor, collect, and transmit traffic reports and problem alerts, and network management software to store, organize, and analyze these reports and alerts. Managed devices are more expensive than unmanaged devices because they have a CPU and software built into them. When we build a managed network, we normally buy all managed devices rather than cutting costs by buying some managed devices and some unmanaged devices, although some organizations do install a mix of the two to cut costs. In this case, the managed devices are usually placed on the backbone and the unmanaged devices in the access layer.

There are three fundamentally different types of network management software. Device management software (sometimes called point management software) is designed to provide information about the specific devices on a network. It enables the network manager to monitor important devices such as servers, routers, and switches and to report configuration information, traffic volumes, and error conditions for each device.

FIGURE 12-1 Device management software used on Indiana University's core backbone network

Figure 12-1 shows a sample display from a device management software package running at Indiana University. This figure shows the amount of traffic on the university's core backbone network. The chart is in color, which is hard to see in a black-and-white book.
The chart shows that traffic is generally under control, with most circuits running at 10% or less of capacity. A few circuits are running at between 20% and 50% of capacity (e.g., the circuits between br2.ictc and br2.bldc). You can see that all circuits are full duplex because there are different traffic amounts in each direction.

System management software (sometimes called enterprise management software or a network management framework) provides the same configuration, traffic, and error information as device management systems but can analyze the device information to diagnose patterns, not just display individual device problems. This is important when a critical device fails (e.g., a router into a high-traffic building). With device management software, all of the devices that depend on the failed device will attempt to send warning messages to the network administrator. One failure often generates several dozen problem reports, called an alarm storm, making it difficult to pinpoint the true source of the problem quickly. The dozens of error messages are symptoms that mask the root cause. System management software tools correlate the individual error messages into a pattern to find the true cause, which is called root cause analysis, and then report the pattern to the network manager. Rather than first seeing pages and pages of error messages, the network manager instead is informed of the root cause of the problem.

Application management software also builds on device management software, but instead of monitoring systems, it monitors applications. In many organizations, there are mission-critical applications that should get priority over other network traffic. For example, real-time order-entry systems used by telephone operators need priority over email.
Application management systems track delays and problems with application layer packets and inform the network manager if problems occur.

Network Management Standards

One important problem is ensuring that hardware devices from different vendors can understand and respond to the messages sent by the network management software of other vendors. By this point in the book, the solution should be obvious: standards. A number of formal and de facto standards have been developed for network management. These standards are application layer protocols that define the type of information collected by network devices and the format of control messages that the devices understand.

The most commonly used network management protocol is the Simple Network Management Protocol (SNMP). Each SNMP device (e.g., router, switch, server) has an agent that collects information about itself and the messages it processes and stores that information in a database called the management information base (MIB). The network manager's management station, which runs the network management software, has access to the MIB. Using this software, the network manager can send control messages to individual devices or groups of devices, asking them to report the information stored in their MIBs.

Most SNMP devices also support remote monitoring (RMON). Most first-generation SNMP tools reported all network monitoring information to one central network management database. Each device would transmit updates to its MIB on the server every few minutes, greatly increasing network traffic. RMON SNMP software enables MIB information to be stored on the device itself or on distributed RMON probes that store MIB information closer to the devices that generate it. The data are not transmitted to the central server until the network manager requests them, thus reducing network traffic (Figure 12-2).
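The RMON-style arrangement just described, in which each device tallies counters in its own MIB and answers only when polled, can be sketched in simplified form. This is a toy illustration, not a real SNMP implementation; the object names loosely mimic standard MIB-II counters.

```python
# Toy illustration (not real SNMP): each managed device keeps an MIB
# locally, RMON-style, and reports counters only when the management
# station polls it, instead of streaming updates every few minutes.

class ManagedDevice:
    def __init__(self, name):
        self.name = name
        self.mib = {"sysName": name, "ifInOctets": 0, "ifOutOctets": 0}

    def process_message(self, size_bytes):
        # Traffic is tallied locally instead of being sent to the server.
        self.mib["ifInOctets"] += size_bytes
        self.mib["ifOutOctets"] += size_bytes

    def snmp_get(self, oid):
        # Answer a GET request from the management station.
        return self.mib[oid]

switch = ManagedDevice("sw-core-1")
for size in (1500, 1500, 64):
    switch.process_message(size)

# The management station polls only when it needs the data.
print(switch.snmp_get("ifInOctets"))  # -> 3064
```

Until the `snmp_get` poll arrives, no monitoring traffic crosses the network, which is the point of the RMON approach.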
Network information is recorded based on the data link layer protocols, network layer protocols, and application layer protocols so that network managers can get a very clear picture of the exact types of network traffic. Statistics are also collected based on network addresses, so the network manager can see how much network traffic any particular computer is sending and receiving. A wide variety of alarms can be defined, such as instructing a device to send a warning message if certain items in the MIB exceed certain values (e.g., if circuit utilization exceeds 50%).

As the name suggests, SNMP is a simple protocol with a limited number of functions. One problem with SNMP is that many vendors have defined their own extensions to it. So the network devices sold by a vendor may be SNMP compliant, but the MIBs they produce contain additional information that can be used only by network management software produced by the same vendor. Therefore, although SNMP was designed to make it easier to manage devices from different vendors, in practice this is not always the case.

FIGURE 12-2 Network management with Simple Network Management Protocol (SNMP). MIB = management information base

MANAGEMENT FOCUS 12-3 Network Management at ZF Lenksysteme

ZF Lenksysteme manufactures steering systems for cars and trucks. It is headquartered in southern Germany but has offices and plants in France, England, the United States, Brazil, India, China, and Malaysia. Its network has about 300 servers and 600 devices (e.g., routers, switches).
ZF Lenksysteme had a network management system, but when a problem occurred with one device, nearby devices also issued their own alarms. The network management software did not recognize the interactions among the devices, and the resulting alarm storm meant that it took longer to diagnose the root cause of the problem. The new HP network management system monitors and controls the global network from one central location with only three staff. All devices and servers are part of the system, and interdependencies are well defined, so alarm storms are a thing of the past. The new system has cut costs by 50% and has also extended network management into the production line. The robots on the production line now use TCP/IP networking, so they can be monitored like any other device.

Adapted from: ZF Lenksysteme, HP case studies, hp.com

12.2.2 Managing Network Traffic

Most approaches to improving network performance attempt to maximize network speed. Another approach is to manage where and how we route traffic. This section examines two tools designed to better manage traffic, with the ultimate goal of improving network performance.

Load Balancing

As we mentioned in Chapter 7 on the design of the data center, servers are typically placed together in server farms or clusters, which sometimes have hundreds of servers that perform the same task. In this case, it is important to ensure that when a request arrives at the server farm, it is immediately forwarded to a server that is not busy, or is the least busy. A special device called a load balancer or virtual server acts as a traffic manager at the front of the server farm (Figure 12-3). All requests are directed to the load balancer at its IP address. When a request hits the load balancer, it forwards the request to one specific server using that server's IP address.
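The load balancer's forwarding decision can be sketched as follows. This minimal example uses a round-robin policy and skips servers marked as failed; the server addresses are invented.

```python
class LoadBalancer:
    """Minimal sketch of a load balancer: forward each request to the
    next healthy server in round-robin order. Addresses are invented."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.failed = set()      # servers the balancer has stopped using
        self.next_index = 0

    def forward(self):
        # Try each server at most once, starting from the next in turn.
        for _ in range(len(self.servers)):
            server = self.servers[self.next_index]
            self.next_index = (self.next_index + 1) % len(self.servers)
            if server not in self.failed:
                return server
        raise RuntimeError("no healthy servers in the farm")

lb = LoadBalancer(["10.0.1.1", "10.0.1.2", "10.0.1.3"])
lb.failed.add("10.0.1.2")        # a crashed server is simply skipped
print([lb.forward() for _ in range(4)])
# -> ['10.0.1.1', '10.0.1.3', '10.0.1.1', '10.0.1.3']
```

Adding or removing a server only means changing the `servers` list in the balancer; the clients, which see only the balancer's address, are unaware of the change.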
Sometimes a simple round-robin formula is used (requests go to each server one after the other in turn); in other cases, more complex formulas track how busy each server actually is. If a server crashes, the load balancer stops sending requests to it, and the network continues to operate without the failed server. Load balancing makes it simple to add or remove servers without affecting users. You simply add or remove the server(s) and change the software configuration in the load balancer; no one is aware of the change.

FIGURE 12-3 Network with load balancer

Policy-Based Management

With policy-based management (sometimes called application shaping or traffic shaping), the network manager uses special software to set priority policies for network traffic that take effect when the network becomes busy. For example, the network manager might say that order processing and videoconferencing get the highest priority (order processing because it is the lifeblood of the company and videoconferencing because poor response time will have the greatest impact on it). Policy management is usually implemented as a combination of hardware and software. A special traffic-shaping device is installed at a key point (usually between a building backbone and the campus backbone). The software that manages this device also configures the network devices behind it, using the quality of service (QoS) capabilities in TCP/IP and/or VLANs to give certain applications the highest priority when the devices become busy. Policy-based management requires managed devices that support QoS.

12.2.3 Reducing Network Traffic

A more radical approach to improving performance is to reduce the amount of traffic on the network.
This may seem quite difficult at first glance; after all, how can we reduce the number of Web pages people request? We can't reduce all types of network traffic, but if we limit high-capacity users and move the most commonly used data closer to the users who need it, we can reduce traffic enough to have an impact on network performance. This section discusses three different tools that can be used.

Capacity Management

Capacity management devices, sometimes called bandwidth limiters or bandwidth shapers, monitor traffic and can slow down traffic from users who consume a lot of network capacity. Capacity management is related to policy-based management but is simpler in that it looks only at the source of the traffic (i.e., the source IP address) rather than the nature of the traffic (e.g., videoconferencing, email, Web pages). These devices are installed at key points in the network, such as between a backbone and the core network. Figure 12-4 shows the control panel for one device made by NetEqualizer.

FIGURE 12-4 Capacity management software

Content Caching

The basic idea behind content caching is to store other people's Web data closer to your users. With content caching, you install a content engine (also called a cache engine) close to your Internet connection and install special content management software on the router (Figure 12-5). The router directs all outgoing Web requests, and the files that come back in response to those requests, to the content engine. The content engine stores the request and the static files that are returned in response (e.g., graphics files, banners). The content engine also examines each outgoing Web request to see if it is requesting static content that the content engine has already stored.

FIGURE 12-5 Network with content engine
If the request is for content already in the content engine, it intercepts the request and responds directly itself with the stored file but makes it appear as though the response came from the URL specified by the user. The user receives a response almost instantaneously and is unaware that the content engine responded; the content engine is transparent. Although not all Web content will be in the content engine's memory, content from many of the most commonly accessed sites on the Internet will be (e.g., yahoo.com, google.com, amazon.com). The contents of the content engine reflect the most common requests for each individual organization that uses it and change over time as the pattern of pages and files changes. Each page or file also has a limited life in the cache before a new copy is retrieved from the original source, so that pages that occasionally change will remain accurate.

By reducing outgoing traffic (and the incoming traffic that comes in response to requests), the content engine enables the organization to purchase a smaller WAN circuit into the Internet. So not only does content caching improve performance, but it can also reduce network costs if the organization produces a large volume of network requests.

Content Delivery

Content delivery, pioneered by Akamai,1 is a special type of Internet service that works in the opposite direction. Rather than storing other people's Web files closer to its own internal users, a content delivery provider stores Web files for its clients closer to those clients' potential users. Akamai, for example, operates almost 10,000 Web servers located near the busiest Internet IXPs and other key places around the Internet.

1 Akamai (pronounced AH-kuh-my) is Hawaiian for "intelligent," "clever," and "cool." See www.akamai.com.
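The content engine's behavior, storing static responses and intercepting later requests until a copy's limited life expires, can be sketched as follows. The URLs, the time-to-live value, and the origin stub are invented; a real cache engine operates on live HTTP traffic rather than function calls.

```python
import time

class ContentEngine:
    """Toy cache engine: serve a stored copy of static content while it
    is still fresh; otherwise fetch from the origin (here, a stub)."""

    def __init__(self, fetch_from_origin, ttl_seconds=300):
        self.fetch = fetch_from_origin
        self.ttl = ttl_seconds
        self.cache = {}                  # url -> (content, stored_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(url)
        if hit and now - hit[1] < self.ttl:
            return hit[0]                # intercepted: answered locally
        content = self.fetch(url)        # miss or stale: go to the origin
        self.cache[url] = (content, now)
        return content

origin_calls = []
def origin(url):
    origin_calls.append(url)
    return f"<static content of {url}>"

engine = ContentEngine(origin, ttl_seconds=300)
engine.get("http://example.com/logo.gif", now=0)
engine.get("http://example.com/logo.gif", now=100)   # served from cache
engine.get("http://example.com/logo.gif", now=400)   # stale: refetched
print(len(origin_calls))  # -> 2
```

Only two of the three requests leave the organization, which is exactly how caching lets an organization buy a smaller WAN circuit.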
These servers contain the most commonly requested Web information for some of the busiest sites on the Internet (e.g., yahoo.com, monster.com, ticketmaster.com).

MANAGEMENT FOCUS 12-4 Load Balancing at Byram Healthcare

Byram Healthcare is a medical supply company serving more than 300,000 customers from 17 operating centers. When its sales representatives began complaining about the slow response times for email, Web, and other key applications, Anthony Acquanita, Byram's network manager, realized that the network architecture had reached its limits. The old architecture was a set of four servers, each running specific applications (e.g., one email server, one Web server). At different points in the week, a different server would become overloaded and provide slow response times for a specific application: the email server first thing Monday morning as people checked their email after the weekend, for example.

The solution was to install a load-balancing switch in front of the servers and install all the major applications on all the servers. This way, when the demand for one application peaks, there are four servers available rather than one. Because the demand for different applications peaks at different times, the result has been dramatically improved performance, without the need to buy new servers. A side benefit is that it is now simple to remove one server from operation at nonpeak times for maintenance or software upgrades without the users noticing (whereas in the past, server maintenance meant disabling an application [e.g., email] for a few hours while the server was worked on).

Adapted from: "Load Balancing Boosts Network," Communications News, November 2005, pp.
40–42.

When someone accesses a Web page of one of Akamai's customers, special software on the customer's Web server determines whether there is an Akamai server containing any static parts of the requested information (e.g., graphics, advertisements, banners) closer to the user. If so, the customer's Web server redirects portions of the request to the Akamai server nearest the user. The user interacts with the customer's Web site for dynamic content or HTML pages, with the Akamai server providing the static content. In Figure 12-6, for example, when a user in Singapore requests a Web page from yahoo.com, the main yahoo.com server farm responds with the dynamic HTML page. This page contains several static graphic files. Rather than providing an address on the yahoo.com site, the Web page is dynamically changed by the Akamai software on the yahoo.com site to pull the static content from the Akamai server in Singapore. If you watch the bottom action bar closely on your Web browser while some of your favorite sites are loading, you'll see references to Akamai's servers. On any given day, 15%–20% of all Web traffic worldwide comes from an Akamai server.

Akamai servers benefit both the users and the organizations that are Akamai's clients, as well as many ISPs and all Internet users not directly involved with the Web request. Because more Web content is now processed by the Akamai server and not the client organization's more distant Web server, the user benefits from a much faster response time; in Figure 12-6, for example, many requests never have to leave Singapore. The client organization benefits because it serves its users with less traffic reaching its Web server; Yahoo!, for example, need not spend as much on its server farm or the Internet connection into its server farm. In our example, the ISPs providing the circuits across the Pacific benefit because less traffic now flows through their network, traffic that is not paid for because of Internet peering agreements.
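The redirection step, rewriting a page's static URLs to point at the edge server nearest the user, can be sketched like this. The regions, edge hostnames, and the simple file-type rule are invented simplifications; real content delivery networks choose servers with far more sophisticated mapping.

```python
# Hypothetical sketch of the redirection step in content delivery:
# static URLs in a page are rewritten to point at the edge server
# nearest the user. Regions, hostnames, and suffixes are invented.

EDGE_SERVERS = {
    "singapore": "edge-sg.cdn.example.net",
    "california": "edge-ca.cdn.example.net",
}

STATIC_SUFFIXES = (".gif", ".jpg", ".png", ".css", ".js")

def rewrite_static_urls(page_urls, user_region):
    """Send static content to the nearest edge server; dynamic content
    is left alone, so the origin server still serves it."""
    edge = EDGE_SERVERS[user_region]
    rewritten = []
    for url in page_urls:
        if url.endswith(STATIC_SUFFIXES):
            path = url.split("/", 3)[3]      # drop scheme and hostname
            rewritten.append(f"http://{edge}/{path}")
        else:
            rewritten.append(url)            # dynamic: origin serves it
    return rewritten

urls = ["http://www.yahoo.com/index.html",
        "http://www.yahoo.com/images/banner.gif"]
print(rewrite_static_urls(urls, "singapore"))
```

The dynamic HTML page still comes from the origin server farm, while the banner graphic is fetched from the edge server in the user's own region.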
Likewise, all other Internet users in Singapore (as well as users in the United States accessing Web sites in Singapore) benefit because there is now less traffic across the Pacific and response times are faster.

FIGURE 12-6 Network with content delivery

MANAGEMENT FOCUS 12-5 Content Delivery at Best Buy

Best Buy operates more than 1,150 retail electronics stores across the United States and Canada and has an extensive online Web store offering more than 600,000 products. Its Web store hosts more than 4,000 million visits a year, more than all of its 1,150 physical stores combined. Best Buy wanted to improve its Web store to better the customer experience and reduce operating costs. Akamai's extensive content delivery presence in North America enabled Best Buy to improve the speed of its Web transactions by 80%, resulting in substantial increases in sales. The shift to content delivery has also reduced the traffic to its own servers by more than 50%, reducing its operating costs.

Adapted from: Akamai Helps Best Buy, Akamai case studies, akamai.com

12.3 CONFIGURATION MANAGEMENT

We now turn our attention to the four basic management tasks that make up network management. The first is configuration management. Configuration management means managing the network's hardware and software configuration, documenting it, and ensuring the documentation is updated as the configuration changes.

12.3.1 Configuring the Network and Client Computers

One of the most common configuration activities is adding and deleting user accounts.
When new users are added to the network, they are usually categorized as members of some group of users (e.g., faculty, students, accounting department, personnel department). Each user group has its own access privileges, which define what file servers, directories, and files its members can access, and provides a standard log-in script. The log-in script specifies what commands are to be run when the user first logs in (e.g., setting default directories, connecting to public disks, running menu programs).

Another common activity is updating the software on the client computers attached to the network. Every time a new application system is developed or updated (or, for that matter, when a new version is released), each client computer in the organization must be updated. Traditionally, this has meant that someone from the networking staff has had to go to each client computer and manually install the software, either from CDs or by downloading it over the network. For a small organization, this is time consuming but not a major problem. For a large organization with hundreds or thousands of client computers (possibly with a mixture of Windows and Apple computers), this can be a nightmare.

Desktop management, sometimes called electronic software delivery or automated software delivery, is one solution to the configuration problem. Desktop management enables network managers to install software on client computers over the network without physically touching each client computer. Most desktop management packages provide application-layer software for the network server and all client computers. The server software communicates directly with the desktop management software on the clients and can instruct it to download and install certain application packages on each client at some predefined time (e.g., at midnight on a Saturday).
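The scheduled-push idea behind desktop management can be sketched as follows. The package names, client names, and the hour-based time scale are all invented for illustration; commercial packages handle retries, reporting, and bandwidth scheduling as well.

```python
# Toy sketch of desktop management ("electronic software delivery"):
# the server records which package each client should install and
# when; agents on the clients install once the time arrives.
# Package names, hosts, and the time scale (hours) are invented.

class DesktopManagementServer:
    def __init__(self):
        self.jobs = []               # (install_at, package, clients)

    def schedule(self, install_at, package, clients):
        self.jobs.append((install_at, package, clients))

    def due_installs(self, now):
        """Return (client, package) pairs whose install time has come."""
        due = []
        for install_at, package, clients in self.jobs:
            if now >= install_at:
                due.extend((c, package) for c in clients)
        return due

server = DesktopManagementServer()
server.schedule(install_at=24, package="antivirus-update-9.2",
                clients=["pc-accounting-01", "pc-accounting-02"])
print(server.due_installs(now=10))   # -> [] (scheduled time not reached)
print(server.due_installs(now=30))   # both clients now get the package
```

No one visits the client computers; the server-side record doubles as documentation of what is installed where.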
Microsoft and many antivirus software vendors use this approach to deliver updates and patches to their software. Desktop management greatly reduces the cost of configuration management over the long term because it eliminates the need to update each and every client computer manually. It also automatically produces and maintains accurate documentation of all software installed on each client computer and enables network managers to produce a variety of useful reports. However, desktop management increases costs in the short term because the software costs money (typically $25 to $50 per client computer) and requires network staff to install it manually on each client computer. The Desktop Management Interface (DMI) is the emerging standard for desktop management.

12.3.2 Documenting the Configuration

Configuration documentation includes information about network hardware, network software, user and application profiles, and network documentation. The most basic information about network hardware is a set of network configuration diagrams that document the number, type, and placement of network circuits (whether organization owned or leased from a common carrier), network servers, network devices (e.g., hubs, routers), and client computers. For most organizations, this is a large set of diagrams: one for each LAN, BN, and WAN. Figure 12-7 shows a diagram of network devices in one office location.

These diagrams must be supplemented by documentation on each individual network component (e.g., circuit, hub, server). Documentation should include the type of device, serial number, vendor, date of purchase, warranty information, repair history, telephone number for repairs, and any additional information or comments the network manager wishes to add. For example, it would be useful to include contact names and telephone numbers for the individual network managers responsible for each separate LAN within the network, as well as common carrier contact information.
(Whenever possible, establish a national account with the common carrier rather than dealing with individual common carriers in separate states and areas.)

FIGURE 12-7 Network configuration diagram

A similar approach can be used for network software. This includes the network operating system and any special-purpose network software. For example, it is important to record which network operating system, and which version or release date, is installed on each network server. The same is true of application software. Sharing software on networks can greatly reduce costs, although it is important to ensure that the organization is not violating any software license rules. Software documentation can also help in negotiating site licenses for software. Many users buy software on a copy-by-copy basis, paying the retail price for each copy. It may be cheaper to negotiate the payment of one large fee for an unlimited-use license for widely used software packages instead of paying on a per-copy basis.

The third type of documentation is the user and application profiles, which should be automatically provided by the network operating system or by additional vendor or third-party software. These should enable the network manager to easily identify the files and directories to which each user has access and each user's access rights (e.g., read-only, edit, delete).
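A minimal sketch of such a profile store, including the lookup in each direction (user to files, and file to authorized users), might look like this. The user names, paths, and rights below are invented; in practice this information comes from the network operating system.

```python
# Minimal sketch of user/application profile documentation. User
# names, file paths, and rights are invented for illustration.

ACCESS = {
    "alice": {"/finance/ledger.xls": "edit",
              "/public/handbook.pdf": "read-only"},
    "bob":   {"/finance/ledger.xls": "read-only"},
}

def files_for_user(user):
    """Forward lookup: what can this user touch, and how?"""
    return ACCESS.get(user, {})

def users_for_file(path):
    """The 'opposite direction': who may touch this file, and how?"""
    return {user: rights[path]
            for user, rights in ACCESS.items() if path in rights}

print(users_for_file("/finance/ledger.xls"))
# -> {'alice': 'edit', 'bob': 'read-only'}
```

The inverted lookup is what lets a manager pick a sensitive file and immediately list every authorized user and their access rights.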
Equally important is the ability to access this information in the "opposite" direction, that is, to be able to select a file or directory and obtain a list of all authorized users and their access rights.

In addition, other documentation must be routinely developed and updated pertaining to the network. This includes network hardware and software manuals, application software manuals, standards manuals, operations manuals for network staff, vendor contracts and agreements, and licenses for software. The documentation should include details about performance and fault management (e.g., preventive maintenance guidelines and schedules, disaster recovery plan, and diagnostic techniques), end user support (e.g., application software manuals, vendor support telephone numbers), and cost management (e.g., annual budgets, repair costs for each device). The documentation should also include any legal requirements to comply with local or federal laws, control, or regulatory bodies.

Maintaining documentation is usually a major issue for most organizations. Have you written programs? How well did you document them? Many technicians hate documentation because it is not "fun" and doesn't provide immediate value the same way that solving problems does. Therefore, it is often overlooked, so when someone leaves the organization, the knowledge of the network leaves with him or her.

12.4 PERFORMANCE AND FAULT MANAGEMENT

Performance management means ensuring the network is operating as efficiently as possible, whereas fault management means preventing, detecting, and correcting faults in the network circuits, hardware, and software (e.g., a broken device or improperly installed software). Fault management and performance management are closely related because any faults in the network reduce performance.
Both require network monitoring, which means keeping track of the operation of network circuits and devices to ensure they are functioning properly and to determine how heavily they are used.

12.4.1 Network Monitoring

Most large organizations and many smaller ones use network management software to monitor and control their networks. One function provided by these systems is to collect operational statistics from the network devices. For small networks, network monitoring is often done by one person, aided by a few simple tools. In large networks, network monitoring becomes more important. Large networks that support organizations operating 24 hours a day are often mission critical, which means a network problem can have serious business consequences. For example, consider the impact of a network failure for a common carrier such as AT&T or for the air traffic control system. These networks often have a dedicated network operations center (NOC) that is responsible for monitoring and fixing problems. Such centers are staffed by skilled network technicians who use sophisticated network management software. When a problem occurs, the software immediately detects it and sends an alarm to the NOC. Staff members in the NOC diagnose the problem and can sometimes fix it from the NOC (e.g., restarting a failed device). At other times, when a device or circuit fails, they must change routing tables to route traffic away from the device and dispatch a technician to fix it.

A Day in the Life: Network Policy Manager

All large organizations have formal policies for the use of their networks (e.g., wireless LAN access, passwords, server space). Most large organizations have a special policy group devoted to the creation of network policies, many of which are devoted to network security. The job of the policy officer is to steer each policy through the policy-making process and ensure that all policies are in the best interests of the organization as a whole.
Although policies are focused inside the organization, they are influenced by events both inside and outside the organization. The policy manager spends a significant amount of time working with outside organizations such as the U.S. Department of Homeland Security, CIO and security officer groups, and industry security consortiums. The goal is to make sure all policies (especially security policies) are up to date and provide a good balance between costs and benefits.

A typical policy begins with networking staff writing a summary containing the key points of the proposed policy. The policy manager takes the summary and uses it to develop a policy that fits the structure required for organizational policies (e.g., date, rationale, scope, responsible individuals, and procedures). The policy manager works with the originating staff to produce an initial draft of the proposed policy. Once everyone in the originating department and the policy office is satisfied with the policy, it is provided to an advisory committee of network users and network managers for discussion. Their suggestions are then incorporated into the policy, or an explanation is provided as to why the suggestions will not be incorporated. After several iterations, the policy becomes a draft policy and is posted for comment from all users within the organization. Comments are solicited from interested individuals, and the policy may be revised. Once the draft is finalized, the policy is presented to senior management for approval. Once approved, the policy is formally published, and the organization charged with implementing the policy begins to use it to guide its operations.

Source: With thanks to Mark Bruhn

MANAGEMENT FOCUS 12-6 Network Management Salaries

Network management is not easy, but it doesn't pay too badly.
Here are some typical jobs and their respective annual salaries:

Network Vice President: $150,000
Network Manager: $90,000
Telecom Manager: $77,000
LAN Administrator: $70,000
WAN Administrator: $75,000
Network Designer: $80,000
Network Technician: $60,000
Technical Support Staff: $50,000
Trainer: $50,000

Figure 12-8 shows part of the NOC at Indiana University (this is only about one-third of it). The NOC is staffed 24 hours a day, 7 days a week to monitor the university's networks. The NOC also has responsibility for managing portions of several very high-speed networks, including Internet2 (see Management Focus Box 12-7).

FIGURE 12-8 Part of the Network Operations Center at Indiana University. Photo courtesy of the author, Alan Dennis

Some types of management software operate passively, collecting the information and reporting it back to the central NOC. Others are active, in that they routinely send test messages to the servers or applications being monitored (e.g., an HTTP Web page request) and record the response times. The network management software discussed in Section 12.2.2 is commonly used for network monitoring. Performance tracking is important because it enables the network manager to be proactive and respond to performance problems before users begin to complain. Poor network reporting leads to an organization that is overburdened with current problems and lacks time to address future needs. Management requires adequate reports if it is to address future needs.

12.4.2 Failure Control Function

Failure control requires developing a central control philosophy for problem reporting, whether the problems are first identified by the NOC or by users calling in to the NOC or a help desk.
Whether problem reporting is done by the NOC or the help desk, the organization should maintain a central telephone number for network users to call when any problem occurs in the network. As a central troubleshooting function, only this group or its designee should have the authority to call hardware or software vendors or common carriers.

Many years ago, before the importance (and cost) of network management was widely recognized, most networks ignored fault management. Network devices were "dumb" in that they did only what they were designed to do (e.g., route packets) but did not provide any network management information. For example, suppose a network interface card fails and begins to transmit garbage messages randomly. Network performance immediately begins to deteriorate because these random messages destroy the messages transmitted by other computers, which then need to be retransmitted. Users notice a delay in response time and complain to the network support group, which begins to search for the cause. Even if the network support group suspects a failing network card (which is unlikely, unless such an event has occurred before), locating the faulty card is very difficult and time consuming.

Most network managers today install managed devices that perform their functions (e.g., routing, switching) and also record data on the messages they process (see Section 12.2.1). Finding and fixing the fault is much simpler, requiring minutes, not hours.

MANAGEMENT FOCUS 12-7 Internet2 Weather Map

Internet2 is a high-performance backbone that connects about 400 Internet2 institutions in more than 100 countries. The current network is primarily a 10 Gbps fiber-optic network. The network is monitored 24 hours a day, 7 days a week from the network operations center (NOC) located on the campus of Indiana University.
The NOC oversees problem, configuration, and change management; network security; performance and policy monitoring; reporting; quality assurance; scheduling; and documentation. The center provides a structured environment that effectively coordinates operational activities with all participants and vendors related to the function of the network.

The NOC uses multiple network management software packages running across several platforms. One of the tools used by the NOC that is available to the general public is the Internet2 Weather Map (noc.net.internet2.edu). Each of the major circuits connecting the major Internet2 gigapops is shown on the map. Each link has two parts, showing the utilization of the circuits to and from each pair.

Adapted from: Internet2 Network NOC (noc.net.internet2.edu)

Numerous software packages are available for recording fault information (Remedy is one of the more popular ones). The reports they produce are known as trouble tickets. These packages let help desk personnel type a trouble report immediately into a computerized failure analysis program. They also automatically produce statistical reports that track how many failures have occurred for each piece of hardware, each circuit, and each software package. Automated trouble tickets are better than paper ones because they allow management personnel to gather problem and vendor statistics.

There are four main reasons for trouble tickets: problem tracking, problem statistics, problem-solving methodology, and management reports. Problem tracking allows the network manager to determine who is responsible for correcting any outstanding problems. This is important because some problems are easily forgotten in the rush of a very hectic day. In addition, anyone might request information on the status of a problem. The network manager can determine whether the problem-solving mechanism is meeting predetermined schedules.
Finally, the manager can be assured that all problems are being addressed. Problem tracking also can assist in problem resolution: Are problems being resolved in a timely manner? Are overdue problems being flagged? Are all resources and information available for problem solving?

Problem statistics are important because they are a control device for the network managers as well as for vendors. With this information, a manager can see how well the network is meeting the needs of end users. These statistics also can be used to determine whether vendors are meeting their contractual maintenance commitments. Finally, they help to determine whether problem-solving objectives are being met.

Problem prioritizing helps ensure that critical problems get priority over less important ones. For example, a network support staff member should not work on a problem on one client computer if an entire circuit with dozens of computers is waiting for help. Moreover, a manager must know whether problem-resolution objectives are being met. For example, how long is it taking to resolve critical problems?

Management reports are required to determine network availability, product and vendor reliability (mean time between failures), and vendor responsiveness. Without them, a manager has nothing more than a "best guess" estimate of the effectiveness of either the network's technicians or the vendor's technicians.

Regardless of whether this information is typed immediately into an automated trouble ticket package or recorded manually in a bound notebook-style trouble log, the objectives are the same. The purposes of the trouble log are to record problems that must be corrected and to keep track of statistics associated with these problems.
For example, the log might reveal that there were 37 calls for software problems (3 for one package, 4 for another package, and 30 for a third package), 26 calls for cable modem problems evenly distributed between 2 vendors, 49 calls for client computers, and 2 calls to the common carrier that provides the network circuits. These data are valuable when the design and analysis group begins redesigning the network to meet future requirements.

TECHNICAL FOCUS 12-1: Technical Reports

Technical reports that are helpful to network managers are those that provide summary information as well as details that enable the managers to improve the network. Technical details include:

• Circuit use
• Usage rate of critical hardware such as host computers, front-end processors, and servers
• File activity rates for database systems
• Usage by various categories of client computers
• Response time analysis per circuit or per computer
• Voice versus data usage per circuit
• Queue-length descriptions, whether in the host computer, in the front-end processor, or at remote sites
• Distribution of traffic by time of day, location, and type of application software
• Failure rates for circuits, hardware, and software
• Details of any network faults

12.4.3 Performance and Failure Statistics

Many different types of failure and recovery statistics can be collected. The most obvious performance statistics are those discussed earlier: how many packets are being moved on what circuits and what the response time is. Failure statistics also tell an important story. One important failure statistic is availability, the percentage of time the network is available to users. It is calculated as the number of hours per month the network is available divided by the total number of hours per month (i.e., 24 hours per day × 30 days per month = 720 hours). Downtime includes times when the network is unavailable because of faults, routine maintenance, and network upgrades.
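The availability calculation is simple enough to sketch directly (the function and variable names are mine):

```python
def availability(downtime_hours, total_hours=24 * 30):
    """Percentage of the month the network was available: hours available
    divided by total hours in the month (720 for a 30-day month)."""
    return 100 * (total_hours - downtime_hours) / total_hours

# 3.6 hours of downtime in a 720-hour month works out to 99.5% availability.
print(round(availability(3.6), 2))  # 99.5
```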
Most network managers strive for 99% to 99.5% availability, with downtime scheduled after normal working hours.

The mean time between failures (MTBF) is the number of hours or days of continuous operation before a component fails. Obviously, devices with higher MTBF are more reliable. When faults occur and devices or circuits go down, the mean time to repair (MTTR) is the average number of minutes or hours until the failed device or circuit is operational again. The MTTR is composed of three separate elements:

MTTRepair = MTTDiagnose + MTTRespond + MTTFix

The mean time to diagnose (MTTD) is the average number of minutes until the root cause of the failure is correctly diagnosed. It is an indicator of the efficiency of the problem management personnel in the NOC or help desk who receive the problem report. The mean time to respond (MTTR) is the average number of minutes or hours until service personnel arrive at the problem location to begin work on the problem. It is a valuable statistic because it indicates how quickly vendors and internal groups respond to emergencies. Compiling these figures over time can lead to a change of vendors or internal management policies or, at a minimum, can exert pressure on vendors who do not respond to problems promptly.

TECHNICAL FOCUS 12-2: Elements of a Trouble Report

When a problem is reported, the trouble log staff members should record the following:

• Time and date of the report
• Name and telephone number of the person who reported the problem
• Time and date of the problem
• Location of the problem
• The nature of the problem
• When the problem was identified
• Why and how the problem happened

Finally, after the vendor or internal support group arrives on the premises, the last statistic is the mean time to fix (MTTF).
This figure tells how quickly the staff is able to correct the problem after they arrive. A very long time to fix compared with other vendors' times may indicate faulty equipment design, inadequately trained customer service technicians, or even that inexperienced personnel are repeatedly sent to fix problems.

For example, suppose your Internet connection at home stops working. You call your ISP, and they fix it over the phone in 15 minutes. In this case, the MTTRepair is 15 minutes, and it is hard to separate the different parts (MTTD, MTTR, and MTTF). Now suppose you call your ISP and spend 1 hour on the phone with them, and they can't fix it over the phone; instead, a technician arrives the next day (18 hours later) and spends 1 hour fixing the problem. In this case, MTTRepair = 1 hour + 18 hours + 1 hour = 20 hours.

The MTBF can be influenced by the original selection of vendor-supplied equipment. The MTTD relates directly to the ability of network personnel to isolate and diagnose failures and can often be improved by training. The MTTR (respond) can be influenced by showing vendors or internal groups how good or bad their response times have been in the past. The MTTF can be affected by the technical expertise of internal or vendor staff and the availability of spare parts on site.

TECHNICAL FOCUS 12-3: Management Reports

Management-oriented reports that are helpful to network managers and their supervisors provide summary information for overall evaluation and for network planning and design.
Details include:

• Fault diagnosis
• Whether most response times are less than or equal to 2 seconds for online real-time traffic
• Whether management reports are timely and contain the most up-to-date statistics
• Peak volume statistics as well as average volume statistics per circuit
• Comparison of activity between today and a similar previous period
• Graphs of daily/weekly/monthly usage, number of errors, or whatever is appropriate to the network
• Network availability (uptime) for yesterday, the last 5 days, the last month, or any other specific period
• Percentage of hours per week the network is unavailable because of network maintenance and repair

Another set of statistics that should be gathered are those collected daily by the network operations group using network management software. These statistics record the normal operation of the network, such as the number of errors (retransmissions) per communication circuit. Statistics also should be collected on the daily volume of transmissions (characters per hour) for each communication circuit, each computer, or whatever is appropriate for the network. It is important to closely monitor usage rates, the percentage of theoretical capacity that is being used. These data can identify computers, devices, or communication circuits that have higher-than-average error or usage rates, and they may be used to predict future growth patterns and failures. A device or circuit that is approaching maximum usage obviously needs to be upgraded.

Such predictions can be made by establishing simple quality control charts similar to those used in manufacturing. Programs use an upper control limit and a lower control limit on the number of blocks in error per day or per week.
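Such control limits can be sketched in a few lines of code. This simplified version sets the limits at the mean plus or minus k standard deviations (my assumption; the text does not specify how the limits are derived) and flags weeks that exceed the upper limit:

```python
import statistics

def control_limits(weekly_errors, k=3):
    """Limits set k standard deviations around the mean blocks-in-error
    count; a simplified, manufacturing-style control chart."""
    mean = statistics.mean(weekly_errors)
    sd = statistics.pstdev(weekly_errors)
    return mean - k * sd, mean + k * sd

def weeks_out_of_control(weekly_errors, upper_limit):
    """Flag weeks whose blocks-in-error count exceeds the upper limit."""
    return [wk for wk, n in enumerate(weekly_errors, start=1) if n > upper_limit]

# A deteriorating circuit, in the spirit of circuit A in Figure 12-9:
# weekly error counts creep upward until they break the 1,100 limit.
circuit = [600, 620, 650, 700, 760, 830, 910, 1000, 1150]
print(weeks_out_of_control(circuit, upper_limit=1100))  # [9]
```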
Notice how Figure 12-9 identifies when the common carrier moved a circuit from one microwave channel to another (circuit B), how a deteriorating circuit can be located and fixed before it crosses the upper control limit (circuit A) and causes problems for users, and how a temporary high rate of errors (circuit C) can occur when new hardware and software are installed.

FIGURE 12-9 Quality control chart for circuits. [The chart plots blocks in error per week over 16 weeks for circuits A, B, and C, between an upper control limit of 1,100 and a lower control limit of 500. Circuit A is deteriorating; circuit B was moved to a new microwave channel; circuit C had new hardware/software implemented.]

12.4.4 Improving Performance

The chapters on LANs, BNs, and WANs discussed several specific actions that can be taken to improve network performance for each of those types of networks. There are also several general activities that improve performance across the different types of networks.

TECHNICAL FOCUS 12-4: Inside a Service-Level Agreement

There are many elements to a solid service-level agreement (SLA) with a common carrier. Some of the important ones include:

• Network availability, measured over a month as the percentage of time the network is available (e.g., [total hours - hours unavailable]/total hours), should be at least 99.5%.
• Average round-trip permanent virtual circuit (PVC) delay, measured over a month as the number of seconds it takes a message to travel over the PVC from sender to receiver, should be less than 110 milliseconds, although some carriers will offer discounted services for SLA guarantees of 300 milliseconds or less.
• PVC throughput, measured over a month as the number of packets received at the destination divided by the number of outbound packets sent over the PVC (not counting packets over the committed information rate, which are discard eligible), should be above 99% and ideally 99.99%.
• Mean time to respond, measured as a monthly average of the time from inception of a trouble ticket until repair personnel are on site, should be 4 hours or less.
• Mean time to fix, measured as a monthly average of the time from the arrival of repair personnel on site until the problem is repaired, should be 4 hours or less.

Adapted from: "Carrier Service-Level Agreements," International Engineering Consortium Tutorial, www.iec.org, February

Most organizations establish service-level agreements (SLAs) with their common carriers and Internet service providers. An SLA specifies the exact type of performance and fault conditions that the organization will accept. For example, the SLA might state that network availability must be 99% or higher and that the MTBF for T1 circuits must be 120 days or more. In many cases, the SLA includes maximum allowable response times. The SLA also states what compensation the service provider must provide if it fails to meet the SLA. Some organizations are also starting to use SLAs internally to define relationships between the networking group and its organizational "customers."

12.5 END USER SUPPORT

Providing end user support means solving whatever problems users encounter while using the network. There are three main functions within end user support: resolving network faults, resolving user problems, and training. We have already discussed how to resolve network faults, so here we focus on resolving user problems and on end user training.

12.5.1 Resolving Problems

Problems with user equipment (as distinct from network equipment) usually stem from three major sources. The first is a failed hardware device. These are usually the easiest to fix.
A network technician simply fixes the device or installs a new part. The second type of problem is a lack of user knowledge. These problems can usually be solved by discussing the situation with the user and walking that person through the process step by step. This is the next easiest type of problem to solve and can often be done by email or over the telephone, although not all users are easy to work with. Problematic users are sometimes called ID ten-T errors, written ID10T.

MANAGEMENT FOCUS 12-8: Network Manager Job Requirements

Being a network manager is not easy. We reviewed dozens of job postings for the key responsibilities, skills, and education desired by employers. The responsibilities listed below were commonly mentioned.

Responsibilities

◾ Determine network needs and architect solutions to address business requirements.
◾ Procure and manage vendor relations with providers of equipment and services.
◾ Deploy new network components and related network systems and services, including the creation of test plans and procedures, documentation of the operation, maintenance and administration of any new systems or services, and training.
◾ Develop, document, and enforce standards, procedures, and processes for the operation and maintenance of the network and related systems.
◾ Manage the efficiency of operations of the current network infrastructure, including analyzing network performance and making configuration adjustments as necessary.
◾ Administer the network servers and the network-printing environment.
◾ Ensure network security, including the development of applicable security, server, and desktop standards, and monitoring processes to ensure that mission-critical processes are operational.
◾ Manage direct reports and contractors. This includes task assignments, performance monitoring, and regular feedback.
Hire, train, evaluate, and terminate staff and contractors under the direction of company policies and processes.
◾ Assist the business in the definition of new product/service offerings and the capabilities and features of the systems to deliver those products and services to customers.

Skills required

◾ Strong technology skills in a variety of technologies
◾ LAN/WAN networking experience working with routers and switches
◾ Experience with Internet access solutions, including firewalls and VPN
◾ Network architecture design and implementation experience
◾ Information security experience
◾ Personnel management experience
◾ Project management experience
◾ Experience working in a team environment
◾ Ability to work well in an unstructured environment
◾ Excellent problem-solving and analytical skills
◾ Effective written and oral communication skills

Education

◾ Bachelor's degree in an information technology field
◾ Security certification
◾ Microsoft MCSE certification preferred
◾ Cisco CCNA certification preferred

The third type of problem is one with the software itself, its settings, or an incompatibility between the software and the network software and hardware. There may be a bug in the software, or the software may not function properly on a certain combination of hardware and software. Solving these problems can be difficult because it requires expertise with the specific software package in use and sometimes requires software upgrades from the vendor.

Resolving either type of software problem begins with a request for assistance from the help desk. Requests for assistance are usually handled in the same manner as network faults. A trouble log is maintained to document all incoming requests and the manner in which they are resolved. The staff member receiving the request attempts to resolve the problem in the best manner possible.
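Simple statistics pulled from such a trouble log show how well incoming requests are being handled. A minimal sketch (the record layout and the numbers are invented for illustration):

```python
# Hypothetical help desk records: (level at which the request was
# resolved, minutes to resolution). Layout and values are invented.
requests = [
    ("first", 20), ("first", 45), ("first", 35), ("first", 50),
    ("second", 240), ("first", 30), ("first", 55), ("second", 300),
    ("first", 25), ("first", 40),
]

def first_level_rate(reqs):
    """Percentage of requests resolved at the first level of support."""
    return 100 * sum(1 for lvl, _ in reqs if lvl == "first") / len(reqs)

def first_level_under_an_hour(reqs):
    """Whether every first-level request closed in under 60 minutes."""
    return all(mins < 60 for lvl, mins in reqs if lvl == "first")

print(first_level_rate(requests))           # 80.0
print(first_level_under_an_hour(requests))  # True
```

Numbers like these let a manager compare the operation against targets such as the first-level resolution rates discussed below.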
Staff members should be provided with a set of standard procedures or scripts for soliciting information from the user about problems. In large organizations, this process may be supported by special software.

There are often several levels to the problem-resolution process. The first level is the most basic, and all staff members working at the help desk should be able to resolve most such requests. Most organizations strive to resolve between 75% and 85% of requests at this first level in less than an hour. If a request cannot be resolved, it is escalated to the second level of problem resolution. Escalation is a normal part of the process, not something "bad." Staff members who handle second-level support have specialized skills in certain problem areas or with certain types of software and hardware. In most cases, problems are resolved at this level. Some large organizations also have a third level of resolution, in which specialists spend many hours developing and testing solutions to the problem, often in conjunction with staff members from the vendors of network software and hardware.

12.5.2 Providing End User Training

End user training is an ongoing responsibility of the network manager. Training is a key part of the implementation of new networks or network components. An ongoing training program is also important because employees may change job functions and new employees require training to use the organization's networks. Training is usually conducted through in-class or one-on-one instruction and online self-paced courses. In-class training should focus on the 20% of network functions that the user will use 80% of the time, instead of attempting to cover all network functions. By getting in-depth instruction on the fundamentals, users become confident about what they need to do.
The training should also explain how to locate additional information from online support, documentation, or the help desk.

12.6 COST MANAGEMENT

One of the most challenging areas of network management over the past few years has been cost management. Data traffic has been growing much more rapidly than the network management budget, which has forced network managers to provide greater network capacity at an ever lower cost per megabyte (Figure 12-10). In this section, we examine the major sources of costs and discuss several ways to reduce them.

FIGURE 12-10 Network traffic versus network management budgets. [The graph shows network traffic rising much faster over time than the network budget.]

12.6.1 Sources of Costs

Operating the network of a large organization can be very expensive. Figure 12-11 shows a recent cost analysis for operating the network for 1 year at Indiana University, a large Big Ten research university serving 40,000 students and 4,000 faculty and staff. This analysis includes the costs of operating the network infrastructure and standard applications such as email and the Web but does not include the costs of other applications such as course management software, registration, student services, accounting, and so on. Indiana University has a federated IT governance structure, which means that the individual colleges and schools on campus also have budgets to hire staff and buy equipment for their faculty and staff. The budget in this figure omits these amounts, so the real costs are probably 50% higher than those shown. Nonetheless, this presents a snapshot of the costs of running a large network.

The largest area of costs in network operations is the $7.4 million spent on WAN circuits. Indiana University operates many high-speed networks (including Internet2), so these costs are higher than might be expected.
This figure also shows the large costs of email, Web services, data storage, and security. The next largest cost item is end user support, which includes training as well as answering users' questions and fixing their problems. The remaining costs are for purchasing new and replacement hardware and software. Once again, remember that this does not include the hardware and software purchased by individual colleges and schools for their faculty and staff, which does not come from the central IT budget.

The total cost of ownership (TCO) is a measure of how much it costs per year to keep one computer operating. TCO includes the direct costs of repair parts, software upgrades, and the support staff members needed to maintain the network, install software, administer the network (e.g., create user IDs, back up user data), provide training and technical support, and upgrade hardware and software. It also includes the indirect cost of time "wasted" by the user when problems occur, when the network is down, or when the user is attempting to learn new software.

Several studies over the past few years by Gartner Group, Inc., a leading industry research firm, suggest that the TCO of a computer is astoundingly high. Most studies suggest that the TCO for a typical Windows computer on a network is about $7,000 per computer per year. In other words, it costs almost five times as much each year to operate a computer as it does to purchase it in the first place. Other studies by firms such as IBM and Information Week, an industry magazine, have produced TCO estimates of between $5,000 and $10,000 per year, suggesting that the Gartner Group's estimates are reasonable.

Although TCO has been accepted by many organizations, other firms argue against the practice of including indirect costs in the calculation. For example, using a technique that includes indirect costs, the TCO of a coffee machine is more than $50,000 per year, not counting the cost of the coffee or supplies.
Suppose getting coffee "wastes" 12 minutes per employee per day; multiplied by 5 days per week, that is 1 hour per week, or about 50 hours per year, of wasted time. If you assume the coffeepot serves 20 employees who have an average cost of $50 per hour (not an unusually high number), you have a loss of $50,000 per year. Some organizations, therefore, prefer to focus on costing methods that examine only the direct costs of operating the computers, omitting softer indirect costs such as "wasted" time. Such measures, often called network cost of ownership (NCO) or real TCO, have found that NCO ranges between $1,500 and $3,500 per computer per year. The typical network management group for a 100-user network would therefore have an annual budget of about $150,000 to $350,000.

FIGURE 12-11 Annual networking costs at Indiana University

Network Operations                                     $14,871,000
  Account Administration
  Authentication Services                                  275,000
  Directory Services                                       257,000
  Infrastructure (incl DHCP, DNS)                          746,000
  E-mail and Messaging                                   1,434,000
  Mainframe and Cluster Operations                         633,000
  Mass Data Storage                                      1,424,000
  Policy Management                                         75,000
  Printing                                                 201,000
  Security Administration                                1,270,000
  WAN Operations                                         7,410,000
  Web Services                                           1,146,000
End User Support                                        $6,544,000
  Departmental Technology Support                          553,000
  Instructional Technology Support                         856,000
  Student Residence Halls Support                          279,000
  Student Technology Centers Support                     1,288,000
  Support Center (Help Desk)                             2,741,000
  Training and Education                                   827,000
Client Hardware                                         $3,901,000
  Classroom Technology Equipment and Supplies              844,000
  Student Residence Halls Equipment and Supplies           601,000
  Student Technology Centers Equipment and Supplies      2,456,000
Application Software                                    $3,729,000
  Software Site Licenses                                 2,540,000
  Student Residence Halls Software                         146,000
  Student Technology Centers Software                    1,043,000
Total                                                  $29,045,000
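That budget arithmetic is straightforward to check (the function name and structure are mine):

```python
def annual_network_budget(n_computers, nco_low=1500, nco_high=3500):
    """Budget range implied by an NCO of $1,500 to $3,500 per computer."""
    return n_computers * nco_low, n_computers * nco_high

low, high = annual_network_budget(100)  # the 100-user network in the text
print(low, high)  # 150000 350000
```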
The most expensive item is personnel (network managers and technicians), which typically accounts for 50% to 70% of total costs. The second most expensive item is WAN circuits, followed by hardware upgrades and replacement parts.

Calculating TCO for universities can be difficult. Do we calculate TCO per computer or per user? Figure 12-11 shows an annual cost of $29 million. If we use the number of users, the TCO is about $659 ($29 million divided by 44,000 users). If we use the number of computers, the TCO is about $4,800 ($29 million divided by the roughly 6,000 computers owned by the university).

There is one very important message in this pattern of costs. Because the largest cost item is personnel time, the primary focus of cost management lies in designing networks and developing policies that reduce personnel time, not hardware costs. Over the long term, it makes more sense to buy more expensive equipment if doing so reduces the cost of network management.

Figure 12-12 shows the average breakdown of personnel costs by function. The largest time cost (where staff members spend most of their time) is systems management, which includes configuration, fault, and performance management tasks that focus on the network as a whole. The second largest item is end user support.

FIGURE 12-12 Network management personnel costs. [Systems management 38%, end user support 31%, network devices 9%, client computers 9%, application software 8%, other 5%.]

Network managers usually find it difficult to manage their budgets because networks grow so rapidly. They often find themselves having to defend ever-increasing requests for more equipment and staff. To counter these escalating costs, many large organizations have adopted charge-back policies for users of WANs and mainframe-based networks.
(A charge-back policy attempts to allocate the costs associated with the network to specific users.) These users must "pay" for their network usage by transferring part of their budget allocations to the network group. Such policies are seldom used in LANs, which makes for one more potential cultural difference between network management styles.

12.6.2 Reducing Costs

Given the huge amounts in TCO, or even the substantial amounts spent in NCO, there is considerable pressure on network managers to reduce costs. Figure 12-13 summarizes five steps to reduce network costs. The first and most important step is to develop standards for client computers, servers, and network devices (i.e., switches, routers). These standards define one configuration (or a small set of configurations) that is permitted for all computers and devices. Standardizing hardware and software makes it easier to diagnose and fix problems, and there are fewer software packages for the network support staff members to learn. The downside, of course, is that rigid adherence to standards reduces innovation.

FIGURE 12-13 Reducing network costs

Five Steps to Reduce Network Costs
• Develop standard hardware and software configurations for client computers and servers.
• Automate as much of the network management function as possible by deploying a solid set of network management tools.
• Reduce the costs of installing new hardware and software by working with vendors.
• Centralize help desks.
• Move to thin-client or cloud-based architectures.

MANAGEMENT FOCUS 12-9: Total Cost of Ownership in Minnesota

Total cost of ownership (TCO) has come to the classroom. As part of a national TCO initiative, several school districts, including one in Minnesota, recently conducted a real TCO analysis.
The school district was a system of eight schools (one high school, one middle school, and six elementary schools) serving 4,100 students in kindergarten through grade 12. All schools are connected via a frame relay WAN to the district head office.

Costs were assessed in two major groups: direct costs and indirect costs. The direct costs included hardware (replacement client computers, servers, networks, and printers and supplies), software, internal network staff, and external consultants. The indirect costs included staff training and development. "Wasted time" was not included in the TCO analysis. The district examined its most recent annual budget and allocated its spending into these categories.

The district calculated that it spent about $1.2 million per year to support its 1,200 client computers, providing a TCO of about $1,004 per client computer per year. Figure 12-14 provides a summary of the costs by category. A TCO of $1,004 is below average, indicating a well-managed network. The district had implemented several network management best practices, such as using a standardized set of software, using new standardized hardware, and providing professional development to teachers to reduce support costs. One other major contributing factor was the extremely low salaries paid to the IT technical staff (less than $25,000 per year) because of the district's rural location. Had the district been located in a more urban area, IT staff costs would have doubled, bringing TCO closer to the lower end of the national average.

FIGURE 12-14 Total cost of ownership (per client computer per year) for a Minnesota school district
Adapted from: "Minnesota District Case Study," Taking TCO to the Classroom, k12tco.gartner.com

Costs per client computer per year:
• IT Staff: $451 (36%)
• Consultants: $33 (3%)
• Software: $52 (4%)
• Indirect Costs: $221 (18%)
• Replacement Hardware: $247 (25%), consisting of Client Computers $201 (20%), Servers $29 (3%), Network $6 (1%), and Supplies $11 (1%)

The second most important step is to automate as much of the network management process as possible. Desktop management can significantly reduce the cost to upgrade when new software is released. It also enables faster installation of new computers, speeds recovery when software needs to be reinstalled, and helps enforce the standards policies. The use of network management software to identify and diagnose problems can significantly reduce time spent in performance and fault management. Likewise, help desk software can cut the cost of the end user support function.

A third step is to do everything possible to reduce the time spent installing new hardware and software. The cost of a network technician's spending half a day to install and configure new computers is often $300 to $500. Desktop management is an important step to reducing costs, but careful purchasing can also go a long way. The installation of standard hardware and software (e.g., Microsoft Office) by the hardware vendor can significantly reduce costs. Likewise, careful monitoring of hardware failures can quickly identify vendors of less reliable equipment who should be avoided in the next purchasing cycle.

Traditionally, help desks have been decentralized into user departments. The result is a proliferation of help desks and support staff members, many of whom tend to be generalists rather than specialists in one area. Many organizations have found that centralizing help desks enables them to reduce the number of generalists and provide more specialists in key technology areas.
This results in faster resolution of difficult problems. Centralization also makes it easier to identify common problems occurring in different parts of the organization and take actions to reduce them.

Finally, many network experts argue that moving to thin-client or cloud-based architectures, with just Web browsers on the client (see Chapter 2), can significantly reduce costs. Although this can reduce the cost to buy software, the real saving lies in the support costs. Because they are restricted to a narrow set of functions and generally do not need software installations, thin-client architectures become much easier to manage. TCO and NCO drop by 20% to 40%. Most organizations anticipate using thin-client and cloud-based architectures selectively, in areas where applications are well defined and can easily be restricted.

12.7 IMPLICATIONS FOR MANAGEMENT

Network management is one of the more challenging functions because it requires a good understanding of networking technologies, an ability to work with end users and management, and an understanding of the key elements driving networking costs. Normally, no one notices it until something goes wrong. As demand for network capacity increases, the costs associated with network management have typically increased in most organizations. Justifying these increased costs to senior management can be challenging because senior managers often do not see the greatly increasing amounts of network traffic; all they see are increasing costs. The ability to explain the business value of networks in terms understandable to senior management is an important skill. As networks become larger and more complex, network management will increase in complexity.
New technologies for managing networks will be developed as vendors attempt to increase the intelligence of networks and their ability to "self-heal." These new technologies will provide significantly more reliable networks but will also be more expensive and will require new skills on the part of network designers, network managers, and network technicians. Keeping a trained network staff will become increasingly difficult because once staff acquire experience with the new management tools, they will be lured away by other firms offering higher salaries (which, we suppose, is not a bad thing if you're one of the network staff).

SUMMARY

Designing for Performance
Network management software is critical to the design of reliable, high-performance networks. This software provides statistics about device utilizations and issues alerts when problems occur. SNMP is a common standard for network management software and the managed devices that support it. Load balancing and policy-based management are tools used to better manage the flow of traffic. Capacity management, content caching, and content delivery are sometimes used to reduce network traffic.

Configuration Management
Configuration management means managing the network's hardware and software configuration, documenting it, and ensuring that the documentation is updated as the configuration changes. The most common configuration management activity is adding and deleting user accounts. The most basic documentation about network hardware is a set of network configuration diagrams, supplemented by documentation on each individual network component. A similar approach can be used for network software. Desktop management plays a key role in simplifying configuration management by automating and documenting the network configurations.
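Automated configuration documentation of this kind boils down to collecting one record per device and emitting it in a consistent, reviewable form. A toy sketch of the idea (the device records are invented for illustration; no specific desktop management product works exactly this way):

```python
# Build a simple configuration inventory from per-device records, the kind of
# documentation desktop management software maintains automatically.
# All device records below are invented examples.
devices = [
    {"name": "SW-01", "type": "switch", "os": "IOS 15.2", "location": "MDF"},
    {"name": "RTR-01", "type": "router", "os": "IOS 15.4", "location": "MDF"},
    {"name": "PC-117", "type": "client", "os": "Windows", "location": "Room 117"},
]

# Group device names by type so the documentation mirrors the network diagrams.
inventory = {}
for dev in devices:
    inventory.setdefault(dev["type"], []).append(dev["name"])

print(inventory)
# {'switch': ['SW-01'], 'router': ['RTR-01'], 'client': ['PC-117']}
```

The point is not the three-line grouping itself but that the records are produced by software rather than typed by hand, so the documentation stays current as the configuration changes.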
User and application profiles should be automatically provided by the network and desktop management software. There is a variety of other documentation that must be routinely developed and updated, including users' manuals and organizational policies.

Performance and Fault Management
Performance management means ensuring the network is operating as efficiently as possible. Fault management means preventing, detecting, and correcting any faults in the network circuits, hardware, and software. The two are closely related, because any fault in the network reduces performance and because both require network monitoring. Today, most networks use a combination of managed devices, which monitor the network and issue alarms, and a help desk, which responds to user problems. Problem tracking allows the network manager to determine problem ownership, that is, who is responsible for correcting any outstanding problems. Problem statistics are important because they are a control device for the network operators as well as for vendors.

Providing End User Support
Providing end user support means solving whatever network problems users encounter. Support consists of resolving network faults, resolving software problems, and training. Software problems often stem from lack of user knowledge, fundamental problems with the software, or an incompatibility between the software and the network's software and hardware. There are often several levels to problem resolution. End user training is an ongoing responsibility of the network manager. Training usually has two parts: in-class instruction and the documentation and training manuals that the user keeps for reference.

Cost Management
As the demand for network services grows, so do the costs of providing them. The TCO for a typical networked computer is about $7,000 per year per computer, far more than the initial purchase price. The network management cost (omitting "wasted" time) is between $1,500 and $3,500 per year per computer.
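Figures like these are a simple roll-up of a cost inventory, and the Minnesota district in Management Focus 12-9 provides concrete numbers. A sketch recomputing its per-computer cost from the Figure 12-14 categories (the dollar values come from the figure; the variable names are ours):

```python
# Recompute the Minnesota district's "real TCO" (excluding wasted time)
# per client computer from the Figure 12-14 cost categories.
costs_per_computer = {
    "IT staff": 451,
    "Consultants": 33,
    "Software": 52,
    "Indirect costs (training and development)": 221,
    "Client computers": 201,
    "Servers": 29,
    "Network": 6,
    "Supplies": 11,
}

tco_per_computer = sum(costs_per_computer.values())
annual_total = tco_per_computer * 1_200      # the district supports 1,200 clients
replacement_hw = 201 + 29 + 6 + 11           # the "Replacement Hardware" grouping

print(tco_per_computer)   # 1004, matching the $1,004/computer/year in the study
print(annual_total)       # 1204800, i.e., "about $1.2 million per year"
print(replacement_hw)     # 247, the replacement-hardware subtotal in the figure
```

The same roll-up, applied to a larger cost inventory that includes wasted time, is how the $7,000 TCO figure is produced.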
The largest single cost item is staff salaries. The best way to control rapidly increasing network costs is to reduce the amount of time taken to perform management functions, often by automating as many routine ones as possible.

KEY TERMS
alarm, 355; alarm storm, 357; application management software, 357; application shaping, 359; availability, 370; bandwidth limiter, 360; bandwidth shaper, 360; capacity management, 360; charge-back policy, 378; configuration management, 363; content caching, 361; content delivery, 361; content delivery provider, 361; cost management, 375; desktop management, 364; device management software, 356; downtime, 370; end user support, 373; fault management, 366; firefighting, 353; help desk, 368; load balancer, 359; managed device, 355; managed network, 355; management information base (MIB), 357; mean time between failures (MTBF), 371; mean time to diagnose (MTTD), 371; mean time to fix (MTTF), 371; mean time to repair (MTTR), 371; mean time to respond (MTTR), 371; monitor, 366; network cost of ownership (NCO), 376; network documentation, 364; network management, 353; network management software, 357; network monitoring, 366; network operations center (NOC), 366; performance management, 366; problem statistics, 370; problem tracking, 369; quality control chart, 372; remote monitoring (RMON), 357; root cause analysis, 357; service-level agreement (SLA), 373; Simple Network Management Protocol (SNMP), 357; system management software, 356; total cost of ownership (TCO), 376; traffic shaper, 359; trouble ticket, 369; uptime, 372; virtual server, 359

QUESTIONS
1. What skills does a network manager need?
2. What is firefighting?
3. Why is combining voice and data a major organizational challenge?
4. Describe what configuration management encompasses.
5. People tend to think of software when documentation is mentioned. What is documentation in a network situation?
6.
What is desktop management, and why is it important?
7. What is performance and fault management?
8. What does a help desk do?
9. What do trouble tickets report?
10. Several important statistics related to network uptime and downtime are discussed in this chapter. What are they, and why are they important?
11. What is an SLA?
12. How is network availability calculated?
13. What is problem escalation?
14. What are the primary functions of end user support?
15. What is TCO?
16. Why is the TCO so high?
17. How can network costs be reduced?
18. What do network management software systems do, and why are they important?
19. What are SNMP and RMON?
20. Compare and contrast device management software, system management software, and application management software.
21. How does a load balancer work?
22. What is server virtualization?
23. What is policy-based management?
24. What is capacity management?
25. How does content caching differ from content delivery?
26. How does network cost of ownership (aka real TCO) differ from total cost of ownership? Which is the most useful measure of network costs from the point of view of the network manager? Why?
27. Many organizations do not have a formal trouble reporting system. Why do you think this is the case?

EXERCISES
A. What factors might cause peak loads in a network? How can a network manager determine if they are important, and how are they taken into account when designing a data communications network?
B. Today's network managers face a number of demanding problems. Investigate and discuss three major issues.
C. Research the networking budget in your organization and discuss the major cost areas. Discuss several ways of reducing costs over the long term.
D. Explore the traffic on the networks managed by the Indiana University NOC at noc.net.internet2.edu. Compare the volume of traffic in two networks and how close to capacity the networks are.
E.
Investigate the latest versions of SNMP and RMON and describe the functions that have been added in the latest version of each standard.
F. Investigate and report on the purpose, relative advantages, and relative disadvantages of two network management software tools.

MINICASES

I. City School District, Part 1
City School District is a large, urban school district that operates 27 schools serving 22,000 students from kindergarten through grade 12. All schools are networked into a regional WAN that connects the schools to the district central office and each other. The district has a total of 5,300 client computers. The table below shows the annual costs. Calculate the real TCO (without wasted time).

Budget Item: Annual Cost
IT staff salaries: $7,038,400
Consultants: $1,340,900
Software: $657,200
Staff training: $545,900
Client computers: $2,236,600
Servers: $355,100
Network: $63,600
Supplies and parts: $2,114,700

III. Central Textiles
Central Textiles is a clothing manufacturer that operates 16 plants throughout the southern United States and in Latin America. The Information Systems Department, which reports to the vice president of finance, operates the central mainframe and LAN at the headquarters building in Spartanburg, South Carolina, and the WAN that connects all the plants. The LANs in each plant are managed by a separate IT group at each plant that reports to the plant manager (the plant managers report to the vice president of manufacturing). The telephone communications system and long-distance agreements are managed by a telecommunications department in the headquarters that reports to the vice president of finance. The CEO of Central Textiles has come to you asking whether this is the best arrangement, or whether it would make more sense to integrate the three functions under one new department. Outline the pros and cons of both alternatives.

IV.
Indiana University
Reread Management Focus 12-5. Take another look at Figure 12-1. If this is a typical traffic pattern, how would you suggest that they improve performance?

II. City School District, Part 2
Read and complete Minicase I. Examine the TCO by category. Do you think that this TCO indicates a well-run network? What suggestions would you have?

CASE STUDY: NEXT-DAY AIR SERVICE
See the Web site at www.wiley.com/college/fitzgerald

HANDS-ON ACTIVITY 12A: Monitoring Solarwinds Network

One of the key tasks of network management is monitoring the network to make sure everything is running well. There are many effective network monitoring tools available, and several have demonstrations you can view on the Web. One of my favorites is solarwinds.net. They have a live demonstration of their network management software available at npm.solarwinds.net. Log in with the provided guest access.

Figure 12-15 shows the top portion of the demo page. It shows a map of the network with circuits and locations color coded. On the left side of the screen is a list of all nodes showing their status (green for good, yellow for some problems, and red for major problems), although the colors are hard to see in the figure. The bottom left part of the figure shows the busiest servers. The bottom right of this figure shows the nodes with problems, so that a network manager can quickly see problems and act to fix them. For example, the Sales switch is down.

FIGURE 12-15 Solarwinds network management software, used with permission

Figure 12-16 shows the next part of the page after I scrolled down. We now see two pie charts on the right side that show application health (which indicates that the software is an application management package as well as a network management package) and hardware health.
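Status roll-ups like the green/yellow/red node list come from simple threshold checks on polled metrics. A minimal sketch of that classification logic (the node names, metric values, and thresholds are invented for illustration rather than taken from the Solarwinds demo, apart from the downed Sales switch):

```python
# Classify nodes as green/yellow/red from polled health metrics, the way a
# management console summarizes node status. Thresholds are illustrative only.
def node_status(reachable: bool, cpu_pct: float, error_rate: float) -> str:
    if not reachable:
        return "red"                       # node down: major problem
    if cpu_pct >= 90 or error_rate >= 0.05:
        return "yellow"                    # degraded: some problems
    return "green"                         # healthy

nodes = {
    "Core-Router": (True, 42.0, 0.001),
    "Sales-Switch": (False, 0.0, 0.0),     # like the downed Sales switch in the demo
    "Web-Server": (True, 95.5, 0.002),
}

statuses = {name: node_status(*metrics) for name, metrics in nodes.items()}
print(statuses)
# {'Core-Router': 'green', 'Sales-Switch': 'red', 'Web-Server': 'yellow'}
```

Real packages poll these metrics over SNMP from each managed device's MIB; the roll-up to a traffic-light color is the easy part.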
You can click on any of the application or hardware categories to see which applications or hardware are in which status category. The table below these two pie charts shows the processes using the most memory, while the pie chart on the right shows the busiest circuits (top five conversations). You'll note that the software is also a configuration management package, because below this pie chart there is a list of the last configuration changes.

FIGURE 12-16 Solarwinds network management software, used with permission

Figure 12-17 shows the next part of the page. This includes the disk space that is closest to capacity and a summary of recent events. This software also integrates help desk software, so it displays help desk requests that have not yet been completed, in order of priority. At the bottom of the screen is a weather radar map, because weather often causes network issues.

FIGURE 12-17 Solarwinds network management software, used with permission

This page is a summary page. Every element on the page can be clicked to go to a detail page with more information about that item.

Deliverables
1. What problem alerts are currently displayed for the Solarwinds network?
2. What are the top three nodes by CPU load? What are the top three conversations?
3. How many applications are in critical condition? Name one.
4. What is one help desk ticket that has not been completed?

HANDS-ON ACTIVITY 12B: Monitoring AT&T's WAN

AT&T permits you to monitor their Global IP network. Go to ipnetwork.bgtmo.ip.att.net and click on "Look at your world wide network." You'll see a screen that displays all the circuits at each of the major PoPs in their global IP network.
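The per-city statistics this page reports (round-trip delay and packet loss) are aggregates over many individual probes. A sketch of how such aggregates are computed, using invented sample data (a lost probe is recorded as None):

```python
# Summarize round-trip delay and packet loss from a set of probe results,
# the kind of per-circuit statistics a network status page reports.
# Sample values are invented for illustration.
samples_ms = [42.1, 40.8, None, 41.5, 43.0, None, 40.2, 41.9, 42.4, 41.1]

received = [s for s in samples_ms if s is not None]
loss_pct = 100.0 * (len(samples_ms) - len(received)) / len(samples_ms)
avg_delay = sum(received) / len(received)

print(f"packet loss: {loss_pct:.1f}%")          # packet loss: 20.0%
print(f"avg round-trip: {avg_delay:.1f} ms")    # avg round-trip: 41.6 ms
```

Loss can come either from errors or from overloaded circuits, which is why the page reports delay and loss side by side: rising delay with rising loss usually points at congestion rather than line errors.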
You can select a city and see the round-trip delay (from that city to another city and back again). It also displays the percentage of packets that have been lost in transit (due either to errors or to overloading of circuits). The tabs across the top of the screen (e.g., Network Delay, Network Loss, Averages) show summary data across the entire network.

Deliverables
1. What is the current latency and packet loss between Dallas and Austin?
2. What is the current latency and packet loss between Phoenix and New York?

HANDS-ON ACTIVITY 12C: Apollo Residence Network Design

Apollo is a luxury residence hall that will serve honor students at your university. We described the residence in Hands-On Activities at the end of Chapters 7, 8, 9, 10, and 11. In this activity, we want you to revisit the LAN design (Chapter 7), backbone design (Chapter 8), WAN design (Chapter 9), Internet design (Chapter 10), and security design (Chapter 11) and then add the design for good network management (this chapter).

Deliverables
Your team was hired to design the network for the Apollo residence. Design the entire network, including LANs, backbones, WAN, Internet, security, and network management. You will need to refer to the Hands-On Activities in Chapters 7–11 as well as this one.
Figure 12-18 provides a list of possible hardware and software you can add, in addition to the equipment lists in these activities in prior chapters.

FIGURE 12-18 Equipment list

Device or Software: Price (each)
• Add SNMP to any device: $200
• SNMP device management software: $2,000
• SNMP system management software: $4,000
• SNMP application management software: $4,000
• Load balancer, up to 10 servers (includes management software): $1,500
• Load balancer, up to 50 servers (includes management software): $2,500
• Load balancer, up to 100 servers (includes management software): $4,000
• Bandwidth shaper, runs at 1 Gbps (includes management software): $1,000
• Bandwidth shaper, runs at 10 Gbps (includes management software): $3,000
• Traffic shaper, runs at 1 Gbps (includes management software): $10,000
• Traffic shaper, runs at 10 Gbps (includes management software): $30,000
• Cache engine, runs at 1 Gbps (includes management software): $1,000
• Cache engine, runs at 10 Gbps (includes management software): $3,000
• Desktop management software: $1,000 plus $25 per desktop
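Costing out a candidate selection against Figure 12-18 is simple multiplication; a sketch using the prices above (the quantities chosen are an arbitrary example, not a recommended Apollo design):

```python
# Cost out a hypothetical management-equipment selection using the
# Figure 12-18 prices. The quantities are illustrative only.
PRICES = {
    "SNMP on device": 200,
    "SNMP device management software": 2_000,
    "Load balancer (up to 10 servers)": 1_500,
    "Cache engine (1 Gbps)": 1_000,
}

order = {
    "SNMP on device": 12,    # e.g., one per switch and router in the design
    "SNMP device management software": 1,
    "Load balancer (up to 10 servers)": 1,
    "Cache engine (1 Gbps)": 1,
}

hardware_total = sum(PRICES[item] * qty for item, qty in order.items())

# Desktop management software is priced as $1,000 plus $25 per desktop.
desktops = 400
desktop_mgmt = 1_000 + 25 * desktops

print(hardware_total + desktop_mgmt)   # 17900
```

Note how quickly the per-desktop term dominates: at 400 desktops, desktop management alone costs more than all the sample hardware combined, which mirrors the chapter's point that ongoing support, not equipment, drives network costs.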
See dynamic routing address field, 104 Address Resolution Protocol (ARP), 132 address resolution, 130–2 addressing, 124–32 address resolution, 130–2 server name resolution, 130 application layer address, 124 assigning addresses, 124–30 classless addressing, 127 data link layer address, 124 resolution, 132 dynamic addressing, 129 Internet addresses, 125 network layer address, 124 subnets, 127–8 Advanced Encryption Standard (AES), 331 adware, 329 alarm, 355 alarm storm, 357 algorithms, 329–30 American National Standards Institute (ANSI), 14 American Standard Code for Information Interchange (ASCII), 72 amplifiers, 97amplitude, 76 amplitude modulation (AM), 77 amplitude shift keying (ASK), 77 analog data, 61 analog transmission, 76 of digital data, 76–80 modulation, 77–9. See also individual entry anomaly detection, 340 antivirus software, 309 application architectures, 27–35 application logic, 27 choosing architectures, 35 client-based architectures, 28–9 client-server architectures, 29–32 cloud computing architectures, 32–4 data access logic, 27 data storage, 27 host-based architectures, 28 peer-to-peer (P2P) architectures, 34–5 presentation logic, 27 application layer, 10, 11, 26–59, 120–1 address, 124 desktop videoconferencing, 46 electronic mail (email), 39–44 instant messaging (IM), 45–6 Telnet, 44–5 videoconferencing, 46–8 application-level firewall, 321 application logic, 27 application management software, 357 application shaping. See policy-based management application systems, 173 asset, 302 association, 196 asymmetric DSL (ADSL), 282 asymmetric encryption, 329 asynchronous transmission, 103–4 attenuation, 96 authentication, 333. See also user authentication authentication server, 338 authoritative name server, 131 automated software delivery. See desktop management Automatic Repeat reQuest (ARQ), 99 autonomous system, 135 ISPs as, 278 auxiliary port, 139 availability, 298, 370B backbone networks (BNs), 6, 7, 222–44. 
See also routed backbones; switched backbones; virtual LANs (VLANs) best practice backbone design, 234–5Trimsize 8in x 10in Fitzergald bindex.tex V1 - July 17, 2014 12:40 A.M. Page 390390 Index backbone networks (BNs) (continued) performance improvement, 236–7 circuit capacity, 236 device performance, 236 reducing network demand, 236–7 backup controls, 315 bandwidth, 79 bandwidth limiter. See capacity management baseline, 172 baselining, 173 baud rate, 79 beacon frame, 196 biometrics, 337 bipolar signaling, 75 bit rate, 79 bits per second (bps), 79 Border Gateway Protocol (BGP), 135–7 border router, 136 bottleneck, 176, 208 bring your own device (BYOD), 16 broadband technologies, 281 broadcast message, 126, 132, 137 browser-based technologies, 17 brute-force attacks, 330 building-block network design process, 169–71 cost assessment, 170 needs analysis, 170, 171–4 technology design, 170 burst error, 95 bus topology, 191 business continuity, ensuring, 298, 308–18. See also Denial-of-Service (DoS) protection; disaster protection device failure protection, 313–14 fault-tolerant servers, 314 redundant array of independent disks (RAID), 314 theft protection, 313 virus protection, 309–10 byte, 72C cable modem, 283–4 architecture, 283 optical-electrical (OE) converter, 283 types of, 284 cable modem termination system (CMTS), 283 cable plan, 202 cables, 5 cabling, 202 campus backbone network, 167 Canadian Radio-Television and Telecommunications Commission (CRTC), 246 capacity management, 360 capacity planning, 175 career opportunities in communications, 3 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), 196 Carrier Sense Multiple Access with Collision Detection (CSMA/CD), 194, 196 carrier wave, 77 Cat 5 cable, 88–90 central authentication, 337–8 central distribution facility (CDF), 223centralized routing, 134 certificate authority (CA), 334 certificate, 338 channel service unit (CSU), 246 channel, 187, 203 character, 72 charge-back policies, 378 chassis 
switch, 224 checksum, 99 Ciphertext, 329 circuits, 5, 62–6 capacity of a circuit, 79 bandwidth, 79 data rate (or bit rate), 79 capacity, improving, 210–11 configuration, 62–6 data flow, 63–4 dedicated circuits, 62 designing, 175–7 bottleneck, 176 capacity planning, 175 circuit loading, 175 turnpike effect, 176 full-duplex transmission, 63 half-duplex transmission, 63 loading, 175 multiplexing, 64–7. See also individual entry multipoint circuit, 62 point-to-point circuit, 62 simplex transmission, 63 Cisco IOS, 139 classless addressing, 127 clear to send (CTS), 197 client, 5 client-based architectures, 27–9 client computer, 185 client protection, 325–9 client-server architectures, 27, 29–32 N-tier architecture, 30 thin clients versus thick clients, 31 three-tier architecture, 30 two-tier architecture, 30 cloud-based architecture, 27, 32 cloud computing architectures, 32–4 cloud computing deployment models, 27–8 cloud providers, 28 community cloud, 27 hybrid cloud strategy, 28 private cloud, 27 public cloud, 27 pure strategy, 28 cloud computing, 35 cloud Email, 38 cloud-hosted virtual desktops, 48 cloud providers, 28 coaxial cable, 67 code/coding scheme, 72 codec, 61 coding, 72–3 byte, 72 character, 72 code, 72Trimsize 8in x 10in Fitzergald bindex.tex V1 - July 17, 2014 12:40 A.M. 
Page 391Index 391 collision detection (CD), 194 collision domain, 192 collision, 194 committed information rate (CIR), 253 common carrier, 168, 245 Common Object Request Broker Architecture (CORBA), 30 communication media, 66–72 coaxial cable, 67 fiber-optic cable, 67–8 guided media, 66 media selection factors, 71–2 cost, 71 error rates, 72 security, 72 transmission distance, 71 transmission speeds, 72 type of network, 71 microwave, 69–70 radio, 69 satellite transmission, 70–1 twisted pair cable, 66–7 wireless media, 66 community cloud, 27 Computer Emergency Response Team (CERT), 297 computer forensics, 342 confidentiality, integrity, and availability (CIA), 298 configuration management, 363–6 desktop management, 364 documenting the configuration, 364–6 software documentation, 365 user and application profiles, 365 of network and client computers, 363–4 connectionless messaging, 122, 123 connection-oriented messaging, 122 four-way handshake, 122 three-way handshake, 122 console port, 139 content caching, 361 content delivery provider, 362 content delivery, 361 contention, 93 continuous ARQ, 100 continuous data protection (CDP), 316 control field, 104 Control Objectives for Information and Related Technology (COBIT), 301 controlled access, 93–4 controls, network, 300–1 corrective controls, 300 detective controls, 300 preventive controls, 300 convergence at home, 22–3 core layer, 167 corrective controls, 300 corrupted data, 95 cost assessment, network design, 170, 178–80 cost management, 375–80 reducing costs, 378–80 automation, 379 by developing standards, 378 moving to thin-client or cloud-based architectures, 380reducing the time spent installing new hardware and software, 379 sources of costs, 375–8 crackers, 318 cross-talk, 96 cryptography, 329 customer premises equipment (CPE), 281 cut-through switching, 193 cycles per second, 76 cyclic redundancy check (CRC), 99D data access logic, 27 data center, 167 designing, 204–6 data communications, 1–25 career 
opportunities, 3 future trends, 16–18 basic concepts, 1–25 history of, 1–25 1800s, 2 1900s, 2 first Industrial Revolution, 2 second Industrial Revolution, 2 data compression, 80 Data Encryption Standard (DES), 331 triple DES (3DES), 331 data flow, in circuits, 63–4 data link layer, 8, 10, 12, 92–115. See also error control; media access control (MAC) address, 124 logical link control (LLC) sublayer, 92 media access control (MAC) sublayer, 93 data link protocols, 93, 103–7 asynchronous transmission, 103–4 ethernet, 105 Link Access Protocol–Balanced (LAP-B), 105 point-to-point protocol (PPP), 106 synchronous data link control (SDLC), 104–5 synchronous transmission, 104–7 Data over Cable Service Interface Specification (DOCSIS), 283 data rate (or bit rate), 79 data service unit (DSU), 246 data storage, 27 DDoS agent, 310 DDoS handler, 310 de facto standard, 13 de jure standardization process, 13 acceptance stage, 14 identification of choices stage, 14 specification stage, 14 decryption, 329 dedicated-circuit networks, 62, 246–51 basic architecture, 246–9 dedicated-circuit services, 249 synchronous optical network (SONET) services, 249 T carrier services, 249–51 full-mesh architecture, 248 mesh architecture, 248 partial-mesh architecture, 248 ring architecture, 247Trimsize 8in x 10in Fitzergald bindex.tex V1 - July 17, 2014 12:40 A.M. 
Page 392392 Index dedicated-circuit networks (continued) star architecture, 248 synchronous optical network (SONET), 251 deliverables, 178 demilitarized zone (DMZ), 322 Denial-of-Service (DoS) protection, 310–13 DDoS agent, 310 DDoS handler, 310 distributed denial-of-service (DDoS) attack, 310 DoS attack, 310 traffic analysis, 311 traffic anomaly analyzer, 312 traffic anomaly detector, 311 traffic filtering, 310 traffic limiting, 310 designated router, 136 desktop management, 343, 364 desktop videoconferencing, 46 destination port address, 120 detective controls, 300 device failure protection, 313–14 device management software, 356 digital data, 61 digital signatures, 333 digital subscriber line (DSL), 65, 281–3 architecture, 281 DSL access multiplexer (DSLAM), 282 DSL modem, 281 local loop, 281 main distribution facility (MDF), 281 types of, 282 asymmetric DSL (ADSL), 282 digital transmission, 74–5 of analog data, 80–84 instant messenger transmitting voice data, 83 telephones transmitting voice data, 81–3 translating from analog to digital, 80–81 Voice over Internet Protocol (VoIP), 83–4 of digital data, 72–6 bipolar signaling, 75 coding, 72–3 digital transmission, 74–5 ethernet, 75–6 ISO 8859, 72 transmission modes, 73–4 unicode, 72 directional antenna, 189 directory services. See central authentication disaster protection, 314–18. 
See also intrusion prevention avoiding disaster, 314 backup, 315 continuous data protection (CDP), 316 disaster recovery, 315 online backup services, 317 recovery controls, 315 disaster recovery drill, 316 disaster recovery firm, 318 disaster recovery outsourcing, 318 disaster recovery plan, 315 discard eligible (DE), 253 disk mirroring, 314 distance vector dynamic routing, 134distortion, 96 Distributed Computing Environment (DCE), 30 distributed computing model, 32 distributed coordination function (DCF), 196 distributed denial-of-service (DDoS) attack, 310 distribution hub, 283 distribution layer, 167 distribution list, 39 domain controllers, 190 Domain Name Service (DNS), 130 working of, 131 domain names, 125 double current signaling, 75 downtime, 370 DSL access multiplexer (DSLAM), 282 dual-band access point, 198 dynamic addressing, 129 Dynamic Host Configuration Protocol (DHCP), 129 dynamic routing, 134 distance vector dynamic routing, 134 hops, 134 link state dynamic routing, 134E eavesdropping, 324 echoes, 96 e-commerce edge, 168 designing, 206–7 efficiency, 98 EIA/TIA 568-B, 65 electronic mail (email), 39–44 mail transfer agent, 40 mail user agent, 40 Multipurpose Internet Mail Extension (MIME), 43–4 three-tier thin client-server architecture, 41–3 two-tier email architecture, 40 Web-based email, 41 working of, 40–43 electronic software delivery. 
See desktop management Encapsulating Security Payload (ESP) packet, 260 encapsulation, 12 encryption, 329–35 asymmetric encryption, 329 authentication, 333 Data Encryption Standard (DES), 331 encryption software, 334 IP Security Protocol (IPSec), 334 IPSec transport mode, 334 IPSec tunnel mode, 335 Pretty Good Privacy (PGP), 334 public key encryption, 331 single-key encryption, 330 symmetric encryption, 329 triple DES (3DES), 331 end user support, 373–5 end user training, 375 resolving problems, 373–5 Enhanced Interior Gateway Routing Protocol (EIGRP), 136 enterprise campuses, 167 enterprise edge, 167Trimsize 8in x 10in Fitzergald bindex.tex V1 - July 17, 2014 12:40 A.M. Page 393Index 393 enterprise management software. See system management software entrapment techniques, 342 error control, 95–103 corrupted data, 95 human errors, 95 lost data, 95 network errors, 95 in practice, 102–3 error correction via retransmission, 99–102 acknowledgment (ACK), 100 continuous ARQ, 100 flow control, 100 forward error correction, 102 forward error correction, working, 101 Hamming code, 101 negative acknowledgment (NAK), 100 sliding window, 100 stop-and-wait ARQ, 99–100 error detection, 98–9 checksum, 99 cyclic redundancy check (CRC), 99 even parity, 98 odd parity, 98 parity bit, 98 parity check, 98 error prevention, 97–8 amplifiers, 97 moving cables, 97 repeaters, 97 shielding, 97 error rates, 95 errors sources of, 96–7 attenuation, 96 cross-talk, 96 distortion, 96 echoes, 96 impulse noise, 96 intermodulation noise, 97 line noise, 96 white noise or Gaussian noise, 96 Ethernet (IEEE 802.3), 105, 191–6. 
See also wired Ethernet; wireless Ethernet tracing Ethernet, 217–18 types of, 195 10/100/1000 Ethernet, 195 1000Base-T, 195 100Base-T, 195 10Base-T, 195 Ethernet services, 254–5 Ethernet, 75–6 even parity, 98 exterior routing protocols, 135 extranet VPN, 258 extranets, 7

F

failure control function, 368–70 help desk, 368 problem statistics, 370 problem tracking, 369 trouble tickets, 369 fault management, 366–73 failure control function, 368–70 failure statistics, 370–72 availability, 370 downtime, 370 mean time between failures (MTBF), 371 mean time to diagnose (MTTD), 371 mean time to fix (MTTF), 371 mean time to repair (MTTR), 371 mean time to respond (MTTR), 371 quality control charts, 372 fault-tolerant server financial impact, 314 Federal Communications Commission (FCC), 246 fiber node, 283 fiber-optic cable, 67–8, 186 multimode, 68 single-mode, 68 fiber to the home (FTTH), 285 file server, 5 firefighting, 353 firewalls, 319–25 application-level firewall, 321 architecture, 322–3 eavesdropping, 324 network address translation (NAT) firewalls, 322 packet-level firewalls, 320 physical security, 323–5 to protect networks, 320 secure switch, 325 sniffer program, 325 flag, 104 flow control, 100 forward error correction, 102 working of, 101 Forwarding Equivalence Classes (FEC), 235 forwarding table, 192 four-way handshake, 122 fractional T1 (FT1), 251 fragment-free switching, 194 frame, 103, 104 frame check sequence field, 105 frame relay services, 253–4 frames, 191 frequency, 76 frequency division multiplexing (FDM), 64 frequency modulation (FM), 77–8 full-duplex transmission, 63 full-mesh architecture, 248

G

gateway, 140 Gaussian noise, 96 geographic scope, 172 geosynchronous satellite transmission, 70 gigapops, 288 Go-Back-N ARQ, 100 guided media, 66
H

H.320 standard, 48 H.323 standard, 48 hackers, 318 hacking, 297 hacktivism, 297 half-duplex transmission, 63 Hamming code, 101 Hardware as a Service (HaaS), 34 hardware layers, 10 headend, 283 Health Insurance Portability and Accountability Act (HIPAA), 297 help desk, 368 hidden node problem, 197 hierarchical backbones. See routed backbones high-level data link control (HDLC), 105 honey pot, 342 hops, 134 host-based architectures, 27, 28 problems with, 28 host-based IPS, 339 hub-based Ethernet, 191 hub polling, 94 hubs, 187–9 human errors, 95 hybrid cloud strategy, 28 hybrid fiber coax (HFC) networks, 283 Hypertext Transfer Protocol (HTTP), 11, 36 HTTP request, 36 HTTP response, 37 inside HTTP request, 37–8 request body, 37 request header, 37 request line, 37 inside HTTP response, 38–9 response body, 38 response header, 38 response status, 38 Hypertext Markup Language (HTML), 39

I

IEEE 802.11, 196. See also wireless Ethernet IEEE 802.1q standard, 231 IEEE 802.3. See Ethernet (IEEE 802.3) IEEE 802.3ac, 105 impact score, 304 impulse noise, 96 information bits, 107 information frame, 104 information warfare program, 318 Infrastructure as a Service (IaaS), 34 instant messaging (IM), 45–46 instant messenger transmitting voice data, 83 Institute of Electrical and Electronics Engineers (IEEE), 14 integrity, 298 interexchange carrier (IXC), 95, 246 interface, 132, 139 Interior Gateway Routing Protocol (IGRP), 136 interior routing protocols, 135 Intermediate System to Intermediate System (IS-IS), 136 intermodulation noise, 97 International Organization for Standardization (ISO), 14 International Telecommunications Union—Telecommunications Group (ITU-T), 14 Internet, 26, 276–95 basic architecture, 277–8 future of, 286–8 building the future, 287–8 gigapops, 288 Internet Engineering Task Force (IETF), 286 Internet governance, 286–7 Internet2, 288 Next Generation Internet (NGI), 288 Internet today, 280–1 speed test, 284 working of, 277–81 Internet access
component, 168 Internet access technologies, 281–6 broadband technologies, 281 cable modem, 283–4 Digital Subscriber Line (DSL), 281–3. See also individual entry Fiber to the home (FTTH), 285 WiMax (Worldwide Interoperability for Microwave Access), 285–6 Internet address classes, 126 Internet addresses, 125 Internet Architecture Board (IAB), 287 Internet Control Message Protocol (ICMP), 135 Internet Corporation for Assigned Names and Numbers (ICANN), 125, 287 Internet domain names, 4 Internet Engineering Steering Group (IESG), 286 Internet Engineering Task Force (IETF), 14, 15, 286 Internet exchange points (IXPs), 277 Internet Group Management Protocol (IGMP), 138 Internet Key Exchange (IKE), 334 Internet Message Access Protocol (IMAP), 40 Internet model, 7, 9–10 application layer, 10 data link layer, 10 groups of layers, 10 hardware layers, 10 internetwork layer, 10 network layer, 10 physical layer, 9 transport layer, 10 Internet Protocol (IP), 12, 119–20 Internet Research Task Force (IRTF), 287 Internet Service Provider (ISP), 1, 257, 277 autonomous system, 278 connecting to, 279–80 local ISPs, 277 national ISPs, 277 regional ISPs, 277 Internet Society, 286 Internet2®, 288 internetwork layer, 10 Internetwork Operating Systems (IOS), 139 intranet VPN, 258 intranets, 7 intrusion, 299 intrusion prevention, 318–42.
See also encryption; firewalls adware, 329 client protection, 325–9 crackers, 318 operating systems, 326–7 perimeter security, 319–25 preventing social engineering, 338–9 proactive principle in, 319 security holes, 325 security policy, 319 server protection, 325–9 spyware, 329 Trojan horse, 327 types of intruders, 318 casual intruders, 318 hackers, 318 organization employees, 319 professional hackers, 318 intrusion prevention systems (IPS), 339–41 anomaly detection, 340 host-based IPS, 339 misuse detection, 339 network-based IPS, 339 intrusion recovery, 341–2 computer forensics, 342 entrapment techniques, 342 honey pot, 342 inventory IT assets, 302–4 mission-critical application, 302 IP Security Protocol (IPSec), 334 IPSec transport mode, 334 IPSec tunnel mode, 335 IP services, 256 IP spoofing, 321 IPS management console, 339 IPS sensor, 339 IPSec transport mode, 334 IPSec tunnel mode, 335 IPv4 private address space, 126–7 ISO 8859, 72

K

Kerberos, 338 key, 330 key management, 330 Kilohertz (kHz), 74

L

L2TP, 258 Label Switched Routers (LSRs), 235 latency, 193, 262 layer-2 switch, 193, 222, 225 Layer-2 tunneling protocol (L2TP), 258 layer-2 VPN, 258 layer-3 VPN, 258 layers, 7 Lempel–Ziv encoding, 80 lightweight directory access protocol (LDAP), 190 line noise, 96 line splitter, 281 Link Access Protocol for Modems (LAP-M), 100 Link Access Protocol–Balanced (LAP-B), 105 link state dynamic routing, 134 load balancer, 205, 359 Local Area Networks (LANs), 5–6, 167, 184–221.
See also wired Ethernet; wired LANs; wireless LANs (WLANs) best practice LAN design, 201–8 designing data center, 204–6 designing user access with wired Ethernet, 202 e-commerce edge, designing, 206–7 network-attached storage (NAS), 206 SOHO environment designing, 207–8 storage area network (SAN), 206 components, 185–91 access points, 187–9 client computer, 185 directional antenna, 189 hubs, 187–9 network circuits, 186–7 network interface card (NIC), 186 network operating system (NOS), 190–1 network profile, 191 omnidirectional antennas, 189 port, 187 power over Ethernet (POE), 188 server, 185 switches, 187–9 twisted-pair cable, 188 user profile, 191 wireless access point, 189 performance improvement, 208–11 circuit capacity, 210–11 hardware, 210 network demand, reducing, 211 performance checklist, 209 redundant array of inexpensive disks (RAID), 210 server performance, 209–10 software, 209 symmetric multiprocessing (SMP), 210 local exchange carrier (LEC), 246 local loop, 82, 281 logical circuit, 61 logical link control (LLC) sublayer, 92 logical network design, 172 logical topology, 191 loopback, 126 lost data, 95

M

MAC address filtering, 200 macro viruses, 309 mail transfer agent, 40 mail user agent, 40 main distribution facility (MDF), 223, 225, 281 managed device, 355 managed networks, 355–8 alarm message, 355 application management software, 357 device management software, 356
network management software, 355 system management software, 356 management information base (MIB), 357 Manchester encoding, 76 massively online, 17–18 maximum allowable rate (MAR), 253 mean time between failures (MTBF), 371 mean time to diagnose (MTTD), 371 mean time to fix (MTTF), 371 mean time to repair (MTTR), 371 mean time to respond (MTTR), 371 media access control (MAC), 93–5 access request technique, 93 contention, 93 controlled access, 93–4 polling, 94 relative performance, 94–5 roll-call polling, 94 sublayer, 93 wired Ethernet, 194–5 wireless Ethernet, 196–7 mesh architecture, 248 full-mesh architecture, 248 partial-mesh architecture, 248 message transmission using layers, 10–13, 117 microwave transmission, 69–70 middleware, 30 mission-critical application, 302 misuse detection, 339 mobile wireless, 286 modem, 61 modems transmitting data, 80 data compression, 80 Lempel–Ziv encoding, 80 modulation, 77–9 amplitude modulation (AM), 77 basic modulation, 77 baud rate, 79 bit rate, 79 frequency modulation (FM), 77–8 phase modulation (PM), 77–8 quadrature amplitude modulation (QAM), 79 sending multiple bits simultaneously, 78 symbol rate, 79 two-bit amplitude modulation, 78 modules, 224 monitor, 366 moving cables, 97 MPEG-2 standard, 48 multicast message, 137 multicasting, 126, 137–8 multimode fiber-optic cables, 68 multiplexing, 64–7 frequency division multiplexing (FDM), 64 statistical time division multiplexing (STDM), 64 time division multiplexing (TDM), 64 wavelength division multiplexing (WDM), 64 multipoint circuit, 62 multiprotocol label switching (MPLS), 235, 255–6 Multipurpose Internet Mail Extension (MIME), 43–4 attachments in, 43–4 multiswitch VLAN, 229 multitenancy, 33

N

name servers, 130 national ISPs, 277 native apps, 17 needs analysis, 170, 171–4 application systems, 173 baseline, 172 deliverables, 174 geographic scope, 172 logical network design, 172 network architecture component, 172 network needs,
categorizing, 173–4 desirable requirements, 173 mandatory requirements, 173 wish-list requirements, 173 network users, 173 negative acknowledgment (NAK), 100 network address translation (NAT) firewalls, 322 network and transport layers, 116–65. See also addressing; routing message transmission using layers, 117 protocols, 118–20 Internet Protocol (IP), 119–20 Transmission Control Protocol (TCP), 118–19 network architecture components, 166–8 access layer, 167 building backbone network, 167 campus backbone, 167 distribution layer, 167 enterprise campuses, 167 network-attached storage (NAS), 206 network authentication. See central authentication network-based IPS, 339 network circuits, 186–7 network cost of ownership (NCO), 376 network design, 166–83. See also technology design building-block network design process, 169–71 cost assessment, 178–80 request for proposal (RFP), 178–9 selling proposal to management, 179–80 SmartDraw software, 183 tools, 177–8 traditional network design process, 168–9 network documentation, 364 network errors, 95 corrupted data, 95 lost data, 95 network interface card (NIC), 186 network interface port, 139 network layer address, 124 network management, 353–87. See also configuration management; cost management; end user support; fault management; managed networks; performance management management tasks, 354 controlling activities, 354 directing activities, 354
organizing activities, 354 planning activities, 354 network managers job requirements, 374 role, 354 network traffic, managing, 359–60 load balancing, 359 policy-based management, 359 software, 366 standards, 357 network mapping, 240–2 network models, 7–13 application layer, 11 data link layer, 12 layers, 7 pros and cons of using, 12–13 network layer, 12 physical layer, 12 transport layer, 12 network monitoring, 366 network operating system (NOS), 190–1 NOS Client Software, 190 NOS Server Software, 190 network operations center (NOC), 366 network performance, designing for, 355–63 managed networks, 355–8 network profile, 191 network security, 296–352. See also controls, network; intrusion prevention; risk assessment; security threats basic control principles of, 303 need for, 298 physical security, 313 reasons for, 297 hacking, 297 hacktivism, 297 mobile devices exploitation, 297 network segmentation, 211 network server, 190 network standards. See standards, network network traffic, reducing, 360–3 capacity management, 360 content caching, 361 content delivery, 361 network users, 173 networks, data communications, 4–7 components of, 4–6 cables, 5 circuit, 5 client, 5 file server, 5 peer-to-peer networks, 5 print server, 5 router, 5 server, 4 switch, 5 Web server, 5 types of, 6–7 backbone networks (BNs), 6 local area networks (LANs), 6 wide area networks (WANs), 6 Next Generation Internet (NGI), 288 N-tier architecture, 30

O

odd parity, 98 omnidirectional antennas, 189 one-time passwords, 337 online backup services, 317 Open Database Connectivity (ODBC), 30 Open Shortest Path First (OSPF), 136 Open Systems Interconnection Reference (OSI) model, 7, 8–9 application layer, 9 data link layer, 8 network layer, 9 physical layer, 8 presentation layer, 9 session layer, 9 transport layer, 9 operating systems, 326–7 Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE), 301 optical-electrical (OE) converter, 283
optical network unit (ONU), 285 overlay networks, 201 oversampling, 81

P

packet assembly/disassembly device (PAD), 252 packet-level firewalls, 320–1 packet-switched networks, 251–6 basic architecture, 252–3 packet-switched services, 253 Ethernet services, 254–5 frame relay services, 253–4 IP services, 256 multiprotocol label switching (MPLS) services, 255–6 parallel transmission, 73 parity bit, 98 parity checking, 98 partial-mesh architecture, 248 passive scanning, 196 passphrases, 336–7 passwords, 337 cracking, 338 one-time passwords, 337 selecting, 336 patch, 326 patch cables, 223 peering, 278 Peer-to-peer (P2P) architectures, 27, 34–5 peer-to-peer networks, 5 performance and failure statistics, 370–2 performance management, 366–73 perimeter security, 319–25 permanent virtual circuits (PVCs), 253 phase, 77 phase modulation (PM), 77–8 phase shift keying (PSK), 77 phishing, 338 physical carrier sense method. See distributed coordination function (DCF) physical circuit, 61 physical layer, 8, 9, 12, 60–91. See also analog transmission: of digital data; circuits; communication media; digital transmission: of digital data physical network design, 175 physical security, 313, 323–5 physical topology, 191 plain old telephone service (POTS), 76 plaintext, 329 Platform as a Service (PaaS), 34 podcasting, 146 point coordination function (PCF), 197 point management software.
See device management software point of presence (POP), 253, 279 point-to-point circuit, 62 Point-to-Point Protocol (PPP), 106 polarity, 73 policy-based management, 359 polling, 94 hub polling, 94 roll-call polling, 94 port, 187 port address, 120 destination port address, 120 source port address, 120 Post Office Protocol (POP), 40 power over Ethernet (POE), 188 presentation logic, 27 Pretty Good Privacy (PGP), 334 preventive controls, 300 print server, 5 private cloud, 27 private IPv4 address space, 126–7 private key, 331 private line services, 246 probe frame, 196 problem statistics, 370 management reports, 370 problem prioritizing, 370 problem tracking, 369 propagation delay, 71 protocol, 10, 36, 93 Protocol Data Units (PDUs), 11, 23, 117 seeing PDUs in messages, 23–5 protocol data, 26 protocol stack, 13 public cloud, 27 public key, 331 public key encryption, 331 secure transmission with, 332 public key infrastructure (PKI), 331 public utilities commission (PUC), 246 pulse amplitude modulation (PAM), 81 pulse code modulation (PCM), 82 pure strategy, 28 PuTTY software package, 45

Q

quadrature amplitude modulation (QAM), 79 quality control charts, 372 Quality of Service (QoS), 123 quantizing error, 80

R

rack, 223 rack-mounted switched backbone network architecture, 224 radio transmission, 69 raindrop attenuation, 71 real TCO, 376 Real-Time Streaming Protocol (RTSP), 123 Real-Time Transport Protocol (RTP), 123 reclocking time, 63 recovery controls, 315 redundancy, 313–14 redundant array of independent disks (RAID), 210, 314 regional ISPs, 277 remote monitoring (RMON), 357 repeaters, 97 replication, 130 request body, 37 request for comments (RFCs), 15, 286 request for proposal (RFP), 178–9 request header, 37 request line, 37 request to send (RTS), 197 reserved addresses, 126 resolving name server, 131 Resource Reservation Protocol (RSVP), 123 response body, 38 response header, 38 response status, 38 retrain time, 63 ring architecture, 247 risk assessment, 301–8.
See also intrusion prevention document existing controls, 307–8 frameworks, 301 inventory IT assets, 302–4 risk control strategy, 307 risk measurement criteria, developing, 301–2 financial, 301–2 legal, 301–2 productivity, 301–2 reputation, 301–2 risk mitigation, 307 risk score, 305 threats identification, 304–7 roll-call polling, 94 root cause analysis, 357 root servers, 131 rootkits, 327 routed backbones, 226–9 architecture, 227 router(s), 5, 132, 223 routing, 132–40 Access Control List (ACL), 140 anatomy of a router, 138–40 border router, 136 centralized routing, 134 designated router, 136 dynamic routing, 134 Internet Group Management Protocol (IGMP), 138 multicasting, 137–8 network manager, connecting to, 139 auxiliary port, 139 console port, 139 network interface port, 139 routing protocols, 135–7 Border Gateway Protocol (BGP), 135–7 Enhanced Interior Gateway Routing Protocol (EIGRP), 136 exterior routing protocols, 135 Interior Gateway Routing Protocol (IGRP), 136 interior routing protocols, 135 Intermediate System to Intermediate System (IS-IS), 136 Internet Control Message Protocol (ICMP), 135 Open Shortest Path First (OSPF), 136 Routing Information Protocol (RIP), 136 static routing, 134 types of, 134–5 Routing Information Protocol (RIP), 136

S

Sarbanes-Oxley Act (SOX), 297 satellite transmission, 70–1 geosynchronous, 70 scalability, 35 scanning, 196 Secure Sockets Layer (SSL), 334 secure switch, 325 security holes, 325 security policy, 319 security threats, 298–300 business continuity, 298 confidentiality, integrity, and availability (CIA), 298 disruptions, 298 types of, 298–300 unauthorized access, 299 segment, 117 segmenting, 121–2 serial transmission, 74 server, 4, 185 server farm, 33 server farms or clusters, 359 server name resolution, 130 server performance, 209–10 server protection, 325–9 server virtualization, 205, 359 service-level agreements (SLAs), 373 session
management, 122–4 connectionless messaging, 122 connection-oriented messaging, 122 Quality of Service (QoS), 123 shared circuit, 62 shielded twisted-pair (STP) cable, 186 shielding, 97 Simple Mail Transfer Protocol (SMTP), 40 inside SMTP packet, 43 SMTP transmission, 42–3 Simple Network Management Protocol (SNMP), 357 simplex transmission, 63 simulation, 177 single-key encryption, 330 algorithm, 330 brute-force attacks, 330 key management, 330 key, 330 single-mode fiber-optic cables, 68 single sign-on. See central authentication single-switch VLAN, 229 site survey, 202 sliding window, 100 small-office, home-office (SOHO), 187 SOHO environment designing, 207–8 smart card, 337 SmartDraw software, 183 sniffer program, 325 social engineering, 338–9 phishing, 338 preventing, 338–9 Software as a Service (SaaS), 33 Solarwinds Network, 383–6 something you are, 337 something you have, 337 something you know, 337 source port address, 120 spikes, 96 spyware, 329 standards, network, 13–16 importance of, 13 standards-making process, 13–16 American National Standards Institute (ANSI), 14 common standards, 16 de facto standards, 13 de jure standard, 13 acceptance stage, 14 identification of choices stage, 14 specification stage, 14 Institute of Electrical and Electronics Engineers (IEEE), 14 International Organization for Standardization (ISO), 14 International Telecommunications Union—Telecommunications Group (ITU-T), 14 Internet Engineering Task Force (IETF), 14, 15 network protocols becoming standards, 15 star architecture, 248 start bit, 103 static routing, 134 statistical time division multiplexing (STDM), 64 stop-and-wait ARQ, 99–100 stop bit, 103 storage area network (SAN), 33, 206 store and forward switching, 194 structured cabling EIA/TIA 568-B, 65 subnet mask, 128–9 subnets, 127–8 subnetted backbones.
See routed backbones supervisory frame, 105 switch, 5, 82 switch-based Ethernet, 192 cut-through switching, 193 fragment-free switching, 194 layer-2 switch, 193 store and forward switching, 194 switched backbones, 223–6 chassis switch, 224 layer-2 switches, 222, 225 modules, 224 rack-mounted, 224 switched Ethernet networks, 201 switched virtual circuits (SVCs), 253 switches, 187–9, 222 VLAN switches, 223 switching, 187 symbol rate, 62, 74, 79 symmetric encryption, 329 symmetric multiprocessing (SMP), 210 synchronization, 103 synchronous data link control (SDLC), 104–5 synchronous digital hierarchy (SDH), 251 synchronous optical network (SONET) services, 249, 251 synchronous transmission, 104–7 system management software, 356

T

T carrier services, 249–51 fractional T1 (FT1), 251 T1 circuit, 249 T3 circuit, 250 technology design, 170, 175–8 circuits, designing, 175–7 clients and servers, designing, 175 deliverables, 178 network design tools, 177–8 telephones transmitting voice data, 81–3 Telnet, 44–5 theft protection, 313 thick-client (fat-client) approach, 31 thin-client approach, 31 threat scenarios, 304 threats identification, 304–7 three-tier architecture, 30 three-tier thin client-server architecture, 41–3 three-way handshake, 122 throughput, 108 tier 1 ISPs, 277 tier 2 ISPs, 277 tier 3 ISPs, 277 time-based tokens, 337 time division multiplexing (TDM), 64 token, 337 token passing, 94 Top Level Domain (TLD), 131 topology, 191 total cost of ownership (TCO), 376 real TCO, 376 traditional network design process, 168–9 traffic analysis, 311 traffic anomaly analyzer, 312 traffic anomaly detector, 311 traffic filtering, 310 traffic limiting, 310 traffic shaping.
See policy-based management Transmission Control Protocol (TCP), 12 Transmission Control Protocol/Internet Protocol (TCP/IP), 116, 118–20 and network layers, 145–7 ARP command, 154 DNS cache, 155 example, 140–7 known addresses, different subnet, 143 known addresses, same subnet, 140–3 TCP connections, 144–5 unknown addresses, 144 IPCONFIG command, 152 NSLOOKUP command, 154 PING command, 153 TRACERT command, 156 transmission efficiency, 107–9 information bits, 107 overhead bits, 107 throughput, 108 transmission modes, 73–4 parallel transmission, 73 serial transmission, 74 transmission rate of information bits (TRIB), 109 transport layer functions, 120–4 linking to application layer, 120–1 segmenting, 121–2 session management, 122–4 transport layer, 9, 10, 12 triple DES (3DES), 331 Trojan horse, 327 trouble tickets, 369 tunnels, 257 turnaround time, 63 turnpike effect, 176 twisted pair cable, 66–7, 188 two-bit amplitude modulation, 78 two-tier architecture, 30, 32 email architecture, 40

U

undersea fiber-optic cables, 66 unicast message, 137 Unicode, 72 uniform resource locator (URL), 36 uninterruptable power supply (UPS), 314 unipolar signaling, 74 United States of America Standard Code for Information Interchange (USASCII), 72 unshielded twisted-pair (UTP) cable, 186 uptime, 372 user authentication, 335–8 access cards, 337 authentication server, 338 biometrics, 337 central authentication, 337–8 certificate, 338 Kerberos, 338 one-time passwords, 337 passwords, 337 smart card, 337 time-based tokens, 337 token, 337 User Datagram Protocol (UDP), 118 user profile, 191, 338

V

V.44, 80 videoconferencing, 46–8 desktop videoconferencing, 46 H.320, 48 H.323, 48 MPEG-2, 48 Webcasting, 48 virtual carrier sense method.
See point coordination function (PCF) virtual LANs (VLANs), 229–34 benefits of, 229–31 multiswitch VLAN, 229, 231 single-switch VLAN, 229 VLAN ID number, 231 VLAN tag, 233 VLAN trunks, 233 VLAN-based backbone network architecture, 230 working of, 231 virtual private networks (VPNs), 257–61 access VPN, 258 basic architecture, 257–8 extranet VPN, 258 Internet Service Provider (ISP), 257 intranet VPN, 258 layer-2 VPN, 258 tunnels, 257 types, 258 VPN gateway, 257 VPN software, 258 working, 258–61 virtual server, 359 virus protection, 309–10 antivirus software, 309 macro viruses, 309 worm, 309 VLAN ID number, 231 VLAN tag, 233 VLAN trunks, 233 Voice over Internet Protocol (VoIP), 17, 83 VPN gateway, 257 VPN software, 258

W

warchalking, 199 wardriving, 199, 218–20 warwalking, 218–20 wavelength division multiplexing (WDM), 64 Web browser, 36 Web of things, 17 Web server, 5, 36 Web-based email, 41–2 Webcasting, 48 white noise, 96 wide area networks (WANs), 6, 7, 166, 168, 245–75. See also dedicated-circuit networks; packet-switched networks; virtual private networks (VPNs) best practice WAN design, 261–2 performance improvement, 262–4 circuit capacity, 263 device performance, 262–3 reducing network demand, 263–4 Wi-Fi, 196 Wi-Fi Protected Access (WPA), 200 WiGig, 199 WiMax (Worldwide Interoperability for Microwave Access), 285–6 Wired Equivalent Privacy (WEP), 199 wired Ethernet, 191–6.
See also wireless Ethernet designing user access with, 202 error control in, 208 hub-based Ethernet, 191 media access control, 194–5 switch-based Ethernet, 192 topology, 191–4 logical topology, 191 physical topology, 191 wired LANs, 184–221 wireless access point, 189 wireless Ethernet, 196–200 associating with AP, 196 distributed coordination function (DCF), 196 frame layout, 197–8 media access control, 196–7 point coordination function (PCF), 197 security, 199–200 MAC address filtering, 200 topology, 196 types of, 198–9 802.11a, 198 802.11ac, 198 802.11ad, 199 802.11b, 198 802.11g, 198 802.11i, 200 802.11n, 198 wireless LANs (WLANs), 16–17, 184–221 wireless media, 66 wish-list requirements, 173 World Wide Web, 36–9 Web, working, 36–7 worm, 309

Z

zero-day attacks, 326

WILEY END USER LICENSE AGREEMENT Go to www.wiley.com/go/eula to access Wiley’s ebook EULA.