
Load Testing in Clustered Environments

Clustered Environments: Load Testing for Architectural Validation (P.S. Don’t Extrapolate!)

Load and performance testing a web application tells you whether your deployment will require a clustered environment. When test results show that throughput is limited by the capacity of a single server while target workloads have not yet been met, you can achieve higher scalability by adding a cluster to the environment. Clustering raises scalability by introducing more servers, or nodes, to expand the capacity of the environment. The benefits of adding hardware include higher capacity, reliability, availability, and scalability; but clustering also adds complexity to your deployment, requiring added maintenance and an increased need for deployment/upgrade automation. To ensure the quality of the environment, you must always validate the cluster and prove out the increased scalability using a methodical performance testing approach. Don't try to extrapolate! It's not as easy as "3 nodes in a cluster will support 3x the workload."

Why Cluster?
An efficiently tuned deployment will, in turn, make efficient use of server resources (memory, CPU, I/O, etc.). A cluster increases the number of servers and distributes the workload among them, and this even distribution can dramatically increase scalability. Not only does this improve the end-user experience by sustaining higher workloads with predictable response times, it also increases the reliability and stability of the deployment. Because the cluster acts as a single server, the loss or shutdown of any node will not result in lost sessions or application data. In the end, the user experience is interrupted less frequently and isn't affected by a single maxed-out server or the loss of a server.

Tuning Tips
When performance or load testing your application uncovers a clear need for clusters or farms to support the target workload, take the following into account. First, configure the cluster efficiently for internal maintenance tasks such as data synchronization and heartbeat communications. User sessions that live in memory fail over to another node in the cluster more quickly than sessions persisted to a database; however, writing sessions to disk is more durable, which may have its own advantages. Make sure you have measured the performance price of data synchronization and heartbeat communications. The goal is to configure the cluster to increase scalability with as little overhead as possible.
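One way to make that "performance price" concrete is to measure it. The following minimal sketch (in Python, with a hypothetical entry-point URL; it is not tied to any particular load tool) times the same request mix against the cluster, so you can re-run it before and after enabling session replication or changing heartbeat settings and compare the numbers:

    # Minimal sketch: measure mean latency of a request mix so the cost of
    # session replication (or heartbeat tuning) can be compared run to run.
    # The URL and request count are placeholders -- point them at your
    # cluster's load-balanced entry point and re-run after each change.
    import time
    import urllib.request

    TARGET = "http://cluster.example.com/app/health"  # hypothetical entry point
    REQUESTS = 200

    def mean_latency(url: str, n: int) -> float:
        total = 0.0
        for _ in range(n):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()  # drain the body so timing includes transfer
            total += time.perf_counter() - start
        return total / n

    if __name__ == "__main__":
        # Run once with replication off, once with it on, and diff the numbers.
        print(f"mean latency: {mean_latency(TARGET, REQUESTS) * 1000:.1f} ms")

Running it once per configuration turns the replication overhead into a measured delta rather than a guess.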

Load Balancers
Load balancers are generally placed in front of the cluster and can be implemented in software or hardware. Their job is to distribute the load evenly across the nodes of the cluster. Just as important, load balancers reroute traffic when a node goes down, preserving the "transparency" of several servers acting as one. There are more mature distribution algorithms than the traditional round robin: smarter load balancers take into account the CPU usage, resource consumption, and overall load of each server and direct each request to the least-loaded node. The number of active users doesn't always equate to more resources being used; it depends on the types of transactions being executed, lightweight versus expensive. Smart load balancers detect the workload and direct incoming traffic based on resource usage. Load balancers often use sticky sessions, based on the client's cookie and/or IP address, to route subsequent requests to the same node of the cluster where the user's session lives. When load testing these environments, you need a load tool that supports IP spoofing, which generates the load of many virtual users using multiple IP addresses from a single machine. Otherwise, the total load would go to a single cluster node.
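To make the distinction between round robin and least-loaded routing concrete, here is a minimal sketch of both selection strategies; the node names and load figures are hypothetical, and a real load balancer would poll these metrics live rather than read them from a static table:

    import itertools

    # Hypothetical cluster nodes with a current-load metric the balancer
    # polls (e.g., CPU utilization or active connection count).
    nodes = {"node-a": 0.72, "node-b": 0.31, "node-c": 0.55}

    # Classic round robin: rotate through nodes regardless of their load.
    round_robin = itertools.cycle(nodes)

    def pick_round_robin() -> str:
        return next(round_robin)

    # "Smarter" selection: send the request to the least-loaded node.
    def pick_least_loaded() -> str:
        return min(nodes, key=nodes.get)

    print(pick_round_robin())   # node-a, then node-b, then node-c, ...
    print(pick_least_loaded())  # node-b (lowest reported load)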

Types of Clustering
Clustering can be achieved using a few common techniques. Vertical clustering adds capacity by installing multiple nodes of a cluster on a single machine. With this approach you must take into account the physical limitations of that machine (CPU, memory, I/O) and be careful not to overutilize its resources; otherwise adding more nodes becomes pointless due to saturation. Horizontal clustering deploys more physical machines, each of which can run one or more nodes of the cluster. Cloud bursting keeps nodes both within the LAN and in the cloud, with the cloud nodes turned "on" during high-volume usage or placed strategically in different geographic locations. The appropriate technique depends on the specifics of your environment. If you need more capacity and you have beefy infrastructure servers but not enough web or application servers to fully utilize the underlying hardware, choose vertical clustering and add more nodes to the same machine. On the other hand, if more physical resources are needed to handle the workload, build out a horizontal cluster by adding hardware and deploying more nodes.
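Before choosing vertical clustering, a quick back-of-the-envelope sizing check helps avoid the saturation trap described above. This sketch is illustrative only; all of the capacity figures are assumptions to be replaced with your machine's actual specs and your node's measured footprint:

    # Rough vertical-clustering sizing: how many nodes fit on one box before
    # memory or CPU saturates? All numbers are illustrative assumptions.
    MACHINE_RAM_GB = 64
    MACHINE_CORES = 16
    NODE_HEAP_GB = 8          # per-node heap plus native overhead
    NODE_PEAK_CORES = 3.0     # cores one node consumes at peak load
    OS_RESERVE_RAM_GB = 8     # leave headroom for the OS and monitoring
    OS_RESERVE_CORES = 2.0

    by_ram = (MACHINE_RAM_GB - OS_RESERVE_RAM_GB) // NODE_HEAP_GB
    by_cpu = int((MACHINE_CORES - OS_RESERVE_CORES) // NODE_PEAK_CORES)
    # The scarcer resource sets the ceiling; here CPU limits the box to 4 nodes.
    print(f"max nodes on this machine: {min(by_ram, by_cpu)}")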

How to Load Test a Cluster?
It's important to take a methodical approach to load testing a clustered environment. Load patterns such as ramping tests let you identify current capacity as well as the scalability gained as you add nodes to the cluster. Remember that doubling the number of nodes in a cluster does not double its capacity. Many components limit the gain, starting with the inter-node communication required just to make the cluster work properly, and that overhead grows with the number of nodes. Capacity is relative and depends on myriad other components within the infrastructure. For example, adding another node may give the application layer nearly 2x the throughput (never quite 2x, because of the internal "housekeeping" overhead of maintaining the cluster), but if the single web server out front is already using all of its worker threads, requests will queue while waiting for a thread to become available and overall throughput will not increase. Only through analysis of load test results will you completely understand the scalability effects of a cluster.

Consider another scenario: you have identified a need for a cluster of application servers, but you deploy too many nodes, creating a backlog of requests on the shared database. Performance and load testing will uncover this vulnerability and many other potential scenarios that could otherwise go undetected. A comparison analysis feature built into the load tool lets you run tests back to back, turning nodes on and off between runs, and quickly visualize the differences. A tool with built-in cloud load generation will also save time and money in setting up and maintaining the performance test environment, especially for high-load tests.
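To keep the "don't extrapolate" rule honest, you can compute scaling efficiency directly from measured results rather than assuming linear growth. In the sketch below, the throughput figures are made-up placeholders; substitute the numbers from your own ramping tests at each node count:

    # Compare measured cluster throughput against the naive linear projection.
    # Throughput figures (requests/sec) are placeholders -- substitute the
    # numbers from your own ramping tests at each node count.
    measured = {1: 420.0, 2: 790.0, 3: 1080.0}  # nodes -> measured req/sec

    baseline = measured[1]
    for nodes, throughput in sorted(measured.items()):
        ideal = baseline * nodes                # naive "n nodes = n x load"
        efficiency = throughput / ideal         # 1.0 would be perfect scaling
        print(f"{nodes} node(s): {throughput:7.1f} req/s, "
              f"{efficiency:.0%} of linear projection")

With these sample numbers, the third node delivers only about 86% of the naively projected throughput, which is exactly the kind of gap extrapolation hides and measurement exposes.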

The Right Approach?
Adding clustering to a deployment lets a web application support higher workloads and gives the advantage of higher availability. However, you must conduct performance tests in order to build an efficient cluster that meets your goals, and don't forget to weigh the benefits against the added maintenance complexity and cost. Clusters require a high level of expertise to implement and maintain, so they aren't the best solution in every situation. Make sure all moving parts are documented, and insist on a complete architectural diagram for future systems administrators (including hierarchical transaction pathways and the location of every node in the cluster, along with the admin consoles). In the end, it's all about delivering the best possible end-user experience, and in many cases clustering is an excellent solution for increasing the scalability of your web deployments.

More Stories By Rebecca Clinard

Rebecca Clinard is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, she worked as a web application performance engineer for Bowstreet, Fidelity Investments, Bottomline Technologies, and Timberland, companies in industries spanning retail, financial services, insurance, and manufacturing. Her expertise lies in creating realistic load tests and performance tuning multi-tier deployments. She has been orchestrating and conducting performance tests since 2001. Clinard graduated from the University of New Hampshire with a BS and also holds a UNIX Certificate from Worcester Polytechnic Institute.
