Title: Hands-on tutorial on Software-defined Networking
Speakers: Srini Seetharaman (Deutsche Telekom), Mike Cohen (BigSwitch)
Abstract: Software-defined networking (SDN) is an emerging paradigm that provides network owners and operators with more evolvable, flexible networks in the data center, cellular network, enterprise, and home. Key attributes of SDN include: separation of the data and control planes; a uniform, vendor-agnostic interface between the control and data planes called OpenFlow; a logically centralized control plane; and slicing and virtualization of the underlying network. With SDN, a researcher or network administrator can introduce a new capability by writing a simple software program that manipulates the flows within a logical slice of the network.
This tutorial will describe OpenFlow/SDN, give hands-on experience with an open-source OpenFlow controller called Floodlight and its associated tools, and discuss use cases for OpenFlow/SDN in data center networks. After the tutorial, you can apply what you've learned to physical networks based on software switches, NetFPGAs, OpenWRT, or even commercial OpenFlow-enabled hardware switches from a growing number of vendors in different domains of use.
- Introduction: Why, What, How
- OpenFlow Potential, Limitations, Current vendors
- Big Picture: Software-defined networking
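To make the abstract's "simple software program that manipulates the flows" concrete, here is a minimal sketch of installing a forwarding rule through Floodlight's Static Flow Pusher REST interface. The endpoint path, JSON field names, and controller host/port below follow one Floodlight release and are assumptions that may differ in your version; treat this as an illustration of the shape of the API, not a definitive client.

```python
import json
import urllib.request

def build_flow_entry(dpid, name, in_port, out_port, priority=32768):
    """Build a static flow entry that forwards in_port -> out_port.

    Field names follow one version of Floodlight's Static Flow Pusher
    JSON format; they have changed across releases, so adjust as needed.
    """
    return {
        "switch": dpid,              # datapath ID of the OpenFlow switch
        "name": name,                # unique name for this flow entry
        "priority": str(priority),
        "ingress-port": str(in_port),
        "active": "true",
        "actions": f"output={out_port}",
    }

def push_flow(entry, controller="127.0.0.1", port=8080):
    """POST the entry to the controller's REST API (host/port assumed)."""
    url = f"http://{controller}:{port}/wm/staticflowentrypusher/json"
    req = urllib.request.Request(
        url,
        data=json.dumps(entry).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()
```

In a lab setup you would call `push_flow(build_flow_entry("00:00:00:00:00:00:00:01", "fwd-1-to-2", 1, 2))` against a running controller; the same rule could equally be installed programmatically from a controller module.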
BioSrini Seetharaman is a member of the Clean Slate Lab at Stanford University and a Senior Research Scientist with Deutsche Telekom R&D Lab, Los Altos, CA. He leads the OpenFlow deployment activities in the US as part of the GENI initiative. He is a recipient of Future Internet Design award from National Science Foundation. He holds a Ph.D. in Computer Science from the Georgia Institute of Technology and a Masters degree in Computer Science from The Ohio State University. His research interests include networking architectures and protocols, overlay networks, network monitoring and green technologies.
Title: Interconnection Networks for Cloud Data Centers
Speaker: Sudipta Sengupta, Microsoft Research
Abstract: Large-scale data centers are enabling the new era of Internet cloud computing. The computing platform in such data centers consists of low-cost commodity servers that, in large numbers and with software support, match the performance and reliability of the expensive enterprise-class servers of yesterday, at a fraction of the cost. The network interconnect within the data center, however, has not seen the same scale of commoditization or dropping price points. Today's data centers use expensive enterprise-class networking equipment and associated best practices that were not designed for the requirements of Internet-scale data center services -- they severely limit server-to-server network capacity, create fragmented pools of servers that do not allow any service to run on any server, and have poor reliability and utilization. The commoditization and redesign of data center networks to meet cloud computing requirements is the next frontier of innovation in the data center.
Recent research in data center networks addresses many of these aspects involving both scale and commoditization. By creating large flat Layer 2 networks, data centers can provide the view of a flat unfragmented pool of servers to hosted services. By using traffic engineering methods (based on both oblivious and adaptive routing techniques) on specialized network topologies, the data center network can handle arbitrary and rapidly changing communication patterns between servers. By making data centers modular for incremental growth, the up-front investment in infrastructure can be reduced, thus increasing their economic feasibility. This is an exciting time to work in the data center networking area, as the industry is on the cusp of big changes, driven by the need to run Internet-scale services, enabled by the availability of low-cost commodity switches/routers, and fostered by creative and novel architectural innovations.
We will begin with an introduction to data centers for Internet/cloud services. We will survey several next-generation data center network designs that meet the criteria of allowing any service to run on any server in a flat unfragmented pool of servers and providing bandwidth guarantees for arbitrary communication patterns among servers (limited only by server line card rates). These span efforts from academia and industry research labs, including VL2, Portland, SEATTLE, Hedera, and BCube, and ongoing standardization activities like IEEE Data Center Ethernet (DCE) and IETF TRILL. We will also cover other emerging aspects of data center networking like energy proportionality for greener data center networks.
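The traffic-oblivious routing mentioned above can be illustrated with the core idea used by designs like VL2: each flow is hashed to a randomly chosen intermediate switch (Valiant Load Balancing), so arbitrary and rapidly changing traffic matrices are spread evenly without per-pattern engineering. The sketch below is illustrative, not any system's actual code; hashing the 5-tuple (rather than picking per packet) keeps a flow on one path and avoids TCP reordering.

```python
import hashlib

def pick_intermediate(flow_5tuple, intermediates):
    """VLB-style path selection: hash a flow's 5-tuple to one
    intermediate (core) switch.

    All packets of a flow get the same intermediate, so per-flow
    ordering is preserved, while distinct flows spread across the
    core roughly uniformly.
    """
    key = "|".join(map(str, flow_5tuple)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return intermediates[digest % len(intermediates)]
```

For example, with four core switches, repeated lookups for one flow always return the same switch, while a batch of flows with different source ports lands on several different switches.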
- Introduction to Cloud Data Centers (20 min)
- Clos Networks and Traffic Oblivious Routing (35 min)
- Flat Layer 2 Network Design (45 min)
- Adaptive Routing (20 min)
- Modular Data Center Network Design (30 min)
- Energy Efficiency in Data Center Networks (30 min)
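As a worked companion to the Clos-network portion of the outline, the standard 3-stage k-ary fat-tree sizing (used in PortLand-style designs) can be computed directly from the switch port count k: k pods, each with k/2 edge and k/2 aggregation switches, (k/2)^2 core switches, and k^3/4 hosts at full bisection bandwidth. The function name is my own; the arithmetic is the standard construction.

```python
def fat_tree_stats(k):
    """Component counts for a 3-stage k-ary fat-tree of k-port switches.

    k pods; each pod holds k/2 edge and k/2 aggregation switches and
    (k/2)^2 hosts; (k/2)^2 core switches; k^3/4 hosts total, with
    full bisection bandwidth from commodity parts.
    """
    assert k % 2 == 0, "port count k must be even"
    half = k // 2
    return {
        "pods": k,
        "edge_switches": k * half,
        "agg_switches": k * half,
        "core_switches": half * half,
        "hosts": k ** 3 // 4,
    }
```

With commodity 48-port switches (k = 48) this yields 27,648 hosts, which is why such topologies are attractive for the scale-out data centers the tutorial describes.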
Bio: Dr. Sudipta Sengupta is currently at Microsoft Research, where he is working on data center systems and networking, peer-to-peer applications, mobile connectivity, non-volatile memory for cloud/server applications, and data deduplication. Previously, he spent five years at Bell Laboratories, Lucent Technologies, where he advanced the state-of-the-art in Internet routing, optical switching, network security, wireless networks, and network coding.
Dr. Sengupta has taught advanced courses/tutorials on networking at many academic/research and industry conferences (please see list below). He received a Ph.D. and an M.S. in Electrical Engg. & Computer Science from Massachusetts Institute of Technology (MIT), USA, and a B.Tech. in Computer Science & Engg. from Indian Institute of Technology (IIT), Kanpur, India. He was awarded the President of India Gold Medal at IIT-Kanpur for graduating at the top of his class across all disciplines. He has published 65+ research papers in some of the top conferences, journals, and technical magazines, including ACM SIGCOMM, ACM SIGMETRICS, USENIX ATC, IEEE INFOCOM, IEEE International Conference on Network Protocols (ICNP), ACM SIGCOMM Internet Measurement Conference (IMC), International Conference on Distributed Computing Systems (ICDCS), Allerton Conference on Communication, Control, and Computing, Conference on Information Sciences and Systems (CISS), IEEE International Symposium on Information Theory (ISIT), ACM Hot Topics in Networking, International Conference on Very Large Data Bases (VLDB), ACM SIGMOD, IEEE/ACM Transactions on Networking (ToN), IEEE Journal on Selected Areas in Communications (JSAC), IEEE Transactions on Information Theory (ToIT), IEEE Communications Magazine, IEEE Network Magazine, ACM Symposium on Theory of Computing (STOC), European Symposium on Algorithms (ESA), Discrete Optimization, and Journal of Algorithms. He has authored 40+ patents (granted or pending) in the area of computer networking.
Dr. Sengupta won the IEEE Communications Society William R. Bennett Prize for 2011 and the IEEE Communications Society Leonard G. Abraham Prize for 2008 for his work on oblivious routing of Internet traffic. At Bell Labs, he received the President's Teamwork Achievement Award for technology transfer of research into Lucent products. His work on peer-to-peer based distribution of real-time layered video received the IEEE ICME 2009 Best Paper Award. At Microsoft, he received the Gold Star Award which recognizes excellence in leadership and contributions for Microsoft's long term success. Dr. Sengupta is a Senior Member of IEEE.
Title: Designing Scientific, Enterprise, and Cloud Computing Systems with InfiniBand and High-Speed Ethernet: Current Status and Trends
Speaker: D. K. Panda (The Ohio State University)
Abstract: InfiniBand (IB) and High-Speed Ethernet (HSE) interconnects are generating a lot of excitement around building next-generation scientific, enterprise, and cloud computing systems. This tutorial will provide an overview of these emerging interconnects, the features they offer, their current market standing, and their suitability for cluster computing. It will start with a brief overview of IB, HSE, and their architectural features. It will then present an overview of the emerging OpenFabrics stack, which encapsulates both IB and Ethernet in a unified manner, and of hardware technologies, such as Virtual Protocol Interconnect (VPI) and RDMA over Converged Enhanced Ethernet (RoCE), that aim at converged hardware solutions. IB and HSE hardware/software solutions and market trends will be highlighted. Finally, sample performance numbers will be shown, highlighting the performance these technologies can achieve in environments such as MPI, PGAS/UPC, parallel file systems, Memcached, and cloud computing (Hadoop, HDFS, and HBase).
- What are IB and HSE?
- Short Overview of InfiniBand Architecture
- Overview of High Speed Ethernet, Convergence and Features
- Overview of IB and HSE Products (hardware and software), Time-frames, and Market Trends
- Designing High-end Systems with IB and HSE: Research Challenges, Case Studies and Performance Evaluation
- Conclusions, Final Q&A, and Discussion
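The performance numbers this tutorial presents typically come from ping-pong microbenchmarks (as in the OSU benchmark suite): one side sends a small message, the other echoes it, and half the round-trip time estimates one-way latency. The sketch below reproduces only that methodology over a local TCP socket pair; real IB/HSE measurements use verbs or MPI rather than TCP loopback, so the absolute numbers here mean nothing beyond illustration.

```python
import socket
import time

def pingpong_latency(iters=1000, size=8):
    """Estimate one-way latency (seconds) via a ping-pong loop.

    Same measurement pattern as MPI-level latency microbenchmarks,
    but over a local socket pair purely for illustration. Small
    messages on a local socketpair arrive whole, so a single recv()
    per message suffices here.
    """
    a, b = socket.socketpair()
    msg = b"x" * size
    start = time.perf_counter()
    for _ in range(iters):
        a.sendall(msg)       # "ping"
        b.recv(size)
        b.sendall(msg)       # "pong"
        a.recv(size)
    elapsed = time.perf_counter() - start
    a.close(); b.close()
    return elapsed / (2 * iters)   # half round-trip = one-way estimate
```

Bandwidth benchmarks follow the same structure with large messages streamed back-to-back; sweeping the message size produces the latency/bandwidth curves commonly shown for IB and HSE.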
Bio: Dhabaleswar K. (DK) Panda is a Professor of Computer Science at the Ohio State University. He obtained his Ph.D. in computer engineering from the University of Southern California. His research interests include parallel computer architecture, high-performance computing, communication protocols, file systems, network-based computing, and Quality of Service. He has published over 300 papers in major journals and international conferences related to these research areas. Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, HSE, and RDMA over Converged Enhanced Ethernet (RoCE). His research group is currently collaborating with national laboratories and leading InfiniBand and HSE companies on designing various subsystems of next-generation high-end systems. The MVAPICH/MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) open-source software packages, developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 1,930 organizations worldwide (in 68 countries). This software has enabled several InfiniBand clusters (including the 5th- and 7th-ranked ones) to get into the latest TOP500 ranking. More than 110,000 downloads of this software have taken place from the project's website alone. This software package is also available with the OpenFabrics stack for network vendors (InfiniBand and iWARP), server vendors, and Linux distributors. Dr. Panda's research is supported by funding from the US National Science Foundation, the US Department of Energy, and several industry partners, including Intel, Cisco, SUN, Mellanox, QLogic, NVIDIA, and NetApp. He is an IEEE Fellow and a member of ACM.
Title: The Evolution of Network Architecture towards Cloud-Centric Applications
Speaker: Loukas Paraschis, Cisco
Abstract: The increasing availability of fast and reliable network connectivity has enabled applications to transition to an Internet-based service delivery model, commonly referred to as "cloud computing". The underlying infrastructure consists of data centers of massive compute and storage resources, and of networking, which is crucial in interconnecting this "cloud infrastructure" and optimizing its cost-performance. As a result, the interconnection of data centers is one of the largest contributors to the increase in traffic demand on traditional backbone transport networks. Consequently, the structure of the Internet has been changing towards a flatter hierarchy with denser interconnections. In this session, we explore the implications of this change for traditional transport network architectures, including optical, routing, and traffic engineering.
We first analyze the functional characteristics and challenges of these networks, and review the current and emerging applications that have motivated these networks to scale by leveraging IP, MPLS, and DWDM transport. We particularly discuss how new, high-bandwidth, predominantly video-related applications (including IPTV, video-on-demand, peer-to-peer, and videoconferencing), often with diverse quality-of-service requirements, are increasingly motivating a fundamental shift in services from circuits to packets, giving rise to the most significant evolution of transport networks in recent history. The tutorial then focuses on current and future converged packet and DWDM transport. We identify the unique network requirements, design challenges, and desired future hardware and software features of these inter-data-center interconnection architectures and their components. We also review the significant advancements in optical technologies, systems, and standards, and the corresponding improvements in capital and operational cost, including bandwidth, density, and power. We finally attempt to evaluate the interplay among intra- and inter-data-center networking architectures, system design, and enabling photonics technology and packaging innovations. Future network evolution, emerging standards, and related research topics are also considered.
- Network Architecture Review, Key Applications: 30-60 minutes
- Evolution and Current Challenges: 30-60 minutes
- Current Technologies and State-of-the-Art System Design: 60-120 minutes
- Emerging technologies, Innovation, and Trends: 15-45 minutes