Tutorials will be held at Cisco Building C, 150 West Tasman Dr.
AM: T1 in Loire (1st floor), T2 in Rogue (2nd floor)
PM: T3 in Rogue (2nd floor), T4 in Loire (1st floor)
Title: Accelerating Big Data with Hadoop and Memcached Using High Performance Interconnects: Opportunities and Challenges
Speakers: Dhabaleswar K. (DK) Panda and Xiaoyi Lu, The Ohio State University
Abstract: Apache Hadoop is gaining prominence in handling Big Data and analytics. Similarly, Memcached in Web 2.0 environments is becoming important for large-scale query processing. These middleware are traditionally written with sockets and do not deliver the best performance on modern clusters with high-performance interconnects. In this tutorial, we will provide an in-depth overview of the architecture of Hadoop components (HDFS, MapReduce, HBase, RPC, etc.) and Memcached. We will examine the challenges in re-designing the networking and I/O components of these middleware with modern interconnects and protocols (such as InfiniBand, iWARP, RoCE, and RSocket) with RDMA. Using the publicly available Hadoop-RDMA (http://hadoop-rdma.cse.ohio-state.edu) software package, we will provide case studies of the new designs for several Hadoop components and their associated benefits. Through these case studies, we will also examine the interplay between high-performance interconnects, storage systems (HDD and SSD), and multi-core platforms to achieve the best solutions for these components.
- Introduction to Big Data Applications and Analytics
- Overview of Apache Hadoop Architecture and its Components
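As illustrative context for the role the abstract gives Memcached in large-scale query processing, the access pattern it typically serves is cache-aside memoization. The following is a minimal sketch in plain Python, not material from the tutorial: a dict stands in for a Memcached client, and `run_query` is a hypothetical slow backend call.

```python
# Minimal cache-aside sketch. The `cache` dict stands in for a real
# Memcached client; run_query() is a hypothetical expensive backend query.
cache = {}

def run_query(sql):
    # Stand-in for a slow database/backend lookup.
    return f"result-of:{sql}"

def cached_query(sql):
    if sql in cache:            # cache hit: skip the backend entirely
        return cache[sql]
    result = run_query(sql)     # cache miss: do the work once...
    cache[sql] = result         # ...then memoize it for later requests
    return result

first = cached_query("SELECT 1")    # miss: computed and stored
second = cached_query("SELECT 1")   # hit: served from the cache
```

With a real Memcached deployment the dict lookups become network round trips, which is exactly why the tutorial's socket-vs-RDMA transport question matters for this workload.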
Bio: Please see the speakers' bios here.
Title: OpenStack & SDN - A Hands-on Tutorial
Speakers: Ramesh Durairaj, Oracle, and Edgar Magana, PLUMgrid
Description: The objective of this tutorial session is to provide you with:
- Introduction to OpenStack: a technical and architectural overview of OpenStack, the premier open-source cloud IaaS framework
- Technical deep dive into the OpenStack Neutron (formerly called Quantum) network subsystem
- Hands-on: bringing up a local OpenStack cloud instance on your laptop
- Hands-on: bringing up the OpenStack Neutron service
- Developer's overview of OpenStack Neutron
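For the hands-on items above, a common way to bring up a local OpenStack cloud with Neutron on a laptop is DevStack. The following `local.conf` fragment is an illustrative sketch only, not the tutorial's actual configuration; the passwords are placeholders and the service names are assumptions based on DevStack's Neutron services of that era.

```ini
[[local|localrc]]
# Illustrative DevStack local.conf sketch (placeholder passwords).
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Replace nova-network with the Neutron services
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta
```

Running DevStack's `./stack.sh` with such a file in place then stands up the local cloud instance and the Neutron service.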
Bio: Ramesh (Ram) Durairaj
Ram Durairaj is an architect and technologist with over 18 years of experience in data center technologies and data center network architectures. Ram has extensive experience in programmable networks, grid computing, and cloud computing paradigms, and is an expert in converged data center network architectures. While working at Cisco, Ram founded, and was the engineering lead for, the OpenStack@Cisco incubation project in the Cisco CTO office for cloud computing.
Ram has participated in, and represented Cisco at, the OpenStack inaugural summit and the design summits since the project's inception in July 2010. He is also one of the founding members of the OpenStack Quantum (now known as Neutron) project and was a core developer.
His past work experience includes Fabric7 Systems, Nortel Networks, and Intergraph. Currently, Ram is a Senior Director at Oracle, leading Oracle's software-defined networking project and its applications in cloud computing frameworks.
Edgar Magana
Edgar Magana is currently a Sr. Member of the Technical Staff at PLUMgrid, where he is in charge of the integration between OpenStack Neutron and the PLUMgrid platform. Edgar worked for over five years in the Chief Technology Office (CTO) of Cisco Systems as a technical leader and researcher. He received his Ph.D. and M.Sc. in Computer Science from the Universitat Politecnica de Catalunya, Spain. Currently, Edgar is a core member of the Neutron development team in OpenStack. He has extensive experience in cloud and grid computing, policy-based management systems, and the monitoring and scheduling of network and computational resources in distributed networks. His research interests include cloud computing, software-defined networks (SDN), IaaS, PaaS, and SaaS.
Title: The role of optical interconnects in data-center networking, and WAN optimization
Speaker: Loukas Paraschis, Cisco
Abstract: The advent of virtualization of large shared clusters of compute and storage infrastructure has greatly increased the importance of "east-west" traffic flows inside a data-center. To optimize around these new traffic patterns, DC architectures have been evolving towards a flatter hierarchy of more densely interconnected switches in "fat tree" designs that can adjust capacity more quickly, with more deterministic performance and greater manageability, using software-defined networking (SDN) abstractions. This intra-DC architecture evolution has been combined with new requirements for intra-DC networking systems with higher capacity and higher port density. Optical technologies have increasingly become the main intra-DC interconnection solution for such high-capacity, longer-distance (>10s of m) links, and a critical factor in the cost-performance optimization of the intra-DC networking fabric.
At the same time, the expanding availability of faster and more reliable broadband network connectivity has enabled a wider proliferation of data-center-based applications through an Internet-based service delivery model, referred to collectively as "cloud" services. As a result, data-centers have become one of the largest contributors to the growth of Internet traffic in the WAN. DC networking has thus been evolving to meet "cloud" service delivery requirements, leveraging an equally important, yet less often reviewed, body of innovation in inter-DC transport architectures. More specifically, new converged IP/MPLS and flexible DWDM transport architectures leverage advancements in routing and photonics technologies, combined with multi-layer control-plane SDN automation and WAN controller optimization, to improve operation, provisioning, restoration, and infrastructure utilization.
In this presentation, we review the key innovations in technology, systems, and network architectures that enable intra-DC connectivity and inter-DC transport to cost-effectively scale to the "cloud-era" requirements of more highly meshed networks with higher capacity and more flexible SDN provisioning. Future network evolution, emerging standards, and related research topics will also be discussed.
Bio: Loukas (Lucas) Paraschis is a senior solution architect in Cisco's Americas next-generation network group, primarily responsible for the evolution of converged transport architectures, WAN optimization, routing and optical technologies, business models, and market development efforts in service provider, large enterprise, and public sector infrastructure. Prior to his current role, Loukas worked as an R&D engineer, product manager, technical leader, and business development manager for Cisco's optical networking and core routing. He has (co)authored more than 50 peer-reviewed publications, invited and tutorial presentations, and two book chapters on next-generation transport networks, holds two patents, and was an IEEE Distinguished Lecturer on this topic. Loukas received his Ph.D. from Stanford University, is a senior member of the IEEE, and a Fellow of the OSA.
Title: Flow and Congestion Controls for Multitenant Datacenters: Virtualization, Transport and Workload Impact
Speakers: Mitch Gusat and Keshav Kamble, IBM
Abstract: A layer 2 to 5 flow and congestion control (FCC) framework for DCNs. The tutorial covers Ethernet/CEE and IBA fabrics seen from the FCC angle, with comparisons to TCP, in practice and theory, also including incast and workload impact.
Where is FCC best introduced: at the link layer (CEE/IBA), the transport layer (TCP et al.), or the application layer L5+ (HPC)? Which FCC schemes are 'better': credits, PFC, window, or rate controls?
- Physical DCN: L2 fabrics. IEEE 802 and IBA standardization results, translated into plain English. Why and how did the IBA and Ethernet standards groups choose their respective FCC schemes: credits, PFC, CCA, and QCN? What are their pros and cons? Flow control: PFC vs. credits; congestion control: QCN vs. CCA.
- Practical issues: How are these schemes to be implemented in practice by designers? How about configuration and tuning by users? What is the interaction between PFC and QCN? How do they compare with TCP and ECN?
- Virtual DCN: FCC in SDNs, from zero to virtualized CEE
- Overlay networks: Currently all virtual switches, vNICs, and hypervisors are lossy. Is this a feature or a bug? What happens to application performance when a lossless vSwitch is introduced?
Using simulations and testbed platforms, we demonstrate the challenges confronting today's DCN and SDN architects: ranging from simple hotspot congestion baseline scenarios (IBA, 802) to HOL blocking (low and high order), from input-generated (Hadoop-like) and output-generated (priority modulation, PFC, and TES) congestion, to saturation trees. We also look inside overlay networks, vSwitches, and vNICs.
Outlook: Next Generation FCC for Physical and Virtual Datacenter Fabrics and Overlays.
a) FCC for Tbps CEE
b) FCC for SDN / overlay networks
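As a toy illustration of the credit-based scheme this tutorial contrasts with PFC and window/rate controls (a minimal sketch, not the speakers' material): the sender may transmit only while it holds credits, and the receiver returns one credit per frame it drains, so in-flight frames can never exceed the receive buffer and nothing is ever dropped.

```python
from collections import deque

class CreditLink:
    """Toy credit-based (lossless) flow control on a single hop.
    One credit corresponds to one receive-buffer slot."""
    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # credits held by the sender
        self.rx_buffer = deque()      # receiver's buffer
        self.dropped = 0

    def send(self, frame):
        if self.credits == 0:         # out of credits: the sender stalls,
            return False              # it never drops (lossless fabric)
        self.credits -= 1
        self.rx_buffer.append(frame)
        return True

    def drain(self):
        if self.rx_buffer:
            frame = self.rx_buffer.popleft()
            self.credits += 1         # credit returned to the sender
            return frame
        return None

link = CreditLink(buffer_slots=2)
sent = [link.send(f) for f in ("f0", "f1", "f2")]  # third send stalls: no credit
link.drain()                   # receiver frees one slot, returning a credit
retry = link.send("f2")        # the stalled frame now goes through
```

The same stall-instead-of-drop behavior is what PFC approximates with pause frames; the backpressure it creates upstream is the root of the HOL-blocking and saturation-tree scenarios examined above.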