SaaS Performance | @CloudExpo @Catchpoint #DevOps #WebPerf #AI #SaaS

SaaS providers need better tools to manage performance, enabling them to get ahead of problems

Managing Performance for SaaS-Based Applications

Software as a service (SaaS), one of the earliest and most successful cloud services, has reached mainstream status. According to Cisco, by 2019 more than four-fifths (83 percent) of all data center traffic will be based in the cloud, up from 65 percent today. The majority of this traffic will be generated by SaaS applications.

Businesses of all sizes are adopting a variety of SaaS-based services - everything from collaboration tools to mission-critical, commerce-oriented applications. The rise in SaaS usage has many benefits, but one drawback: as demand grows, SaaS providers are having a harder time ensuring a high-performing (fast, reliable) end-user experience - just as performance expectations are climbing higher than ever.

SaaS Providers Struggle to Meet Modern Day User Performance Expectations
Websites and applications must be fast. People want an article to load the moment they click it; shoppers will abandon their carts on an ecommerce site that is too slow; customers expect transactions to complete quickly - and the list goes on. According to Google's latest research (February 2017), as page load time increases from 1 second to 7 seconds, the likelihood of a visitor abandoning the page increases 113 percent. For SaaS companies to be successful, they need to adopt the same laser focus on the end-user experience that has fueled the rise of ecommerce and B2B applications.

Meeting these performance demands is proving to be a significant challenge. One recent survey of SaaS providers found more than half reported difficulties in delivering sufficient performance levels, while more than a quarter said their organizations had incurred financial penalties as a result of unmet service-level agreements. Twenty-one percent of respondents whose organizations had an unplanned service interruption said it resulted in the loss of a customer relationship. Among those SaaS providers suffering financial penalties, the average cost incurred was $359,000 - not including the time and effort spent finding and fixing the source of performance declines.

Why Is Ensuring Superior Performance So Difficult?
Ensuring strong performance levels is proving difficult for several reasons:

Infrastructure Build-Outs - As many SaaS providers expand into new geographies and support more businesses, they are adding infrastructure as well as partitioning more existing systems. According to the SaaS provider survey mentioned above, almost half of respondents noted they have at least doubled the number of systems supporting their workloads and applications over the past two years. While this helps them support more customers, there is a nasty side effect - greater complexity, which makes it harder to find and fix the source of performance problems when they do occur.

Users' Geographic Expansion - Heightening these performance challenges is the growth of end users in new geographies. Application performance tends to deteriorate with distance from the SaaS provider's data center: as end users move further away, more variables stand between them and the data center that can degrade performance - networks, ISPs, even browsers. This makes it impossible for SaaS providers to gauge worldwide performance levels based solely on the experiences of end users in just one or two geographies.

To expand and improve geographic reach, many SaaS providers build out their infrastructure, which in turn adds more complexity. The net result is a vicious cycle that ultimately results in decreased visibility into infrastructure health and end-user application performance.
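One implication of this geographic spread is that response times have to be collected and compared across many vantage points, not extrapolated from one. As a minimal sketch of aggregating such measurements - the region names and timings below are purely illustrative, not real data:

```python
from statistics import mean

# Hypothetical response-time samples (ms) collected by monitoring
# agents in different regions -- names and numbers are illustrative.
samples = {
    "us-east": [120, 135, 128],
    "eu-west": [210, 225, 218],
    "ap-south": [480, 510, 495],
}

def summarize_by_region(samples):
    """Average response time per region, sorted fastest to slowest."""
    averages = {region: mean(times) for region, times in samples.items()}
    return sorted(averages.items(), key=lambda kv: kv[1])

for region, avg in summarize_by_region(samples):
    print(f"{region}: {avg:.0f} ms")
```

Even this trivial aggregation makes the point: a provider watching only "us-east" would never see the experience of users routed through the slowest region.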

The Capricious Nature of the Internet - Full SaaS provider outages are rare, but when they do happen, they can be disastrous. The Amazon S3 outage on February 28 served as the latest example, causing widespread availability issues for thousands of websites, apps, and IoT devices.

There have been other instances recently: in February 2016, Office 365 went dark across wide swaths of Europe, and in August 2015, a Google data center in Belgium was struck by successive lightning strikes, causing problems for numerous Google cloud infrastructure customers. Not even tech giants like Amazon, Microsoft, or Google are immune to acts of nature. These inevitable outages are not the fault of the providers - rather, the "ripple effect" results from business users concentrating too much work in the hands of a single provider.

More common than full-blown outages are instances of SaaS applications just not performing well. Even though many SaaS providers have monitoring systems in place, 54 percent report the primary way they find out about performance problems is through customer complaints.

Managing Performance Must Be a Shared Endeavor
SaaS providers need better tools to manage performance, enabling them to get ahead of problems before their customers' end users are affected. This is a fundamental shift from traditional application performance management (APM) to what Gartner refers to as digital experience management (DEM). DEM treats the end-user experience as the ultimate metric and identifies how the myriad underlying services, systems, and components influence it. This approach is consistent with a recent EMA survey in which more than three-quarters of respondents prioritized the ability to troubleshoot and analyze root causes of application performance problems, down to the platform level.

As the number of performance-impacting variables increases, SaaS providers may find themselves drowning in data yet starved for insight. They need advanced analytics to make that data actionable by identifying the cause of performance issues swiftly and accurately. Sometimes the cause is within the SaaS provider's control (such as when a particular region or data center requires more capacity). Other times it is not - for example, a slow ISP or CDN. Even then, the information is still useful, because it avoids time wasted unnecessarily "war-rooming" an issue that lies elsewhere.
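As a rough illustration of making data actionable, the sketch below flags components whose latency sits well above the group using a simple z-score test. The component names, numbers, and the 1.5-sigma threshold are all illustrative assumptions, not a prescribed analytics method:

```python
from statistics import mean, stdev

def flag_outliers(latencies_ms, threshold=1.5):
    """Return components whose latency sits more than `threshold`
    sample standard deviations above the group mean."""
    values = list(latencies_ms.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [name for name, v in latencies_ms.items()
            if (v - mu) / sigma > threshold]

# Illustrative per-component latencies (ms) -- not real measurements.
latencies = {"auth": 100, "db": 110, "cache": 105, "api": 98, "cdn-edge": 900}
print(flag_outliers(latencies))  # -> ['cdn-edge']
```

A real DEM pipeline would work over time series and many dimensions, but the principle is the same: isolate the anomalous component automatically instead of war-rooming every alert.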

Business users of SaaS services should also contribute to the performance management effort. They should consistently measure their SaaS providers' response levels, and monitor real end-user performance at the closest points of geographic proximity. This is the key to gaining the most realistic view of end-user performance around the world. Having this information can help SaaS users hold providers to their SLAs and, perhaps more important, identify performance problems in advance and quickly determine whether the problem lies with the cloud service provider, another component, or an internal infrastructure element.
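Upholding an SLA typically means checking a percentile of measured response times against a contracted threshold. A minimal sketch, assuming a hypothetical "95th percentile under 2 seconds" SLA (both numbers are illustrative, not from any real contract):

```python
def percentile(values, pct):
    """Nearest-rank percentile: the smallest value with at least
    pct percent of samples at or below it (pct in 0..100)."""
    ordered = sorted(values)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

def sla_met(response_times_ms, sla_ms=2000, pct=95):
    """True if the pct-th percentile response time is within the SLA."""
    return percentile(response_times_ms, pct) <= sla_ms

# Illustrative samples: 100 synthetic checks from 100 ms to 2080 ms.
samples = list(range(100, 2100, 20))
print(sla_met(samples))  # 95th percentile is 1980 ms, within 2000 ms
```

Percentiles are the usual choice here because an average hides the slow tail that end users actually complain about.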

Finally, as the recent Amazon S3 example demonstrated, SaaS users should always have a redundancy plan in place for their critical applications. In essence, they need to architect for failure. This may require an investment of time and effort, but can make all the difference in keeping SaaS-based businesses running smoothly when faced with a completely unpredictable event.
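In code terms, architecting for failure often reduces to trying a primary endpoint and falling back to a secondary one. A minimal illustrative sketch - the endpoints below are stand-ins, not real services:

```python
def fetch_with_fallback(fetchers):
    """Try each zero-argument fetcher in order; return the first
    successful result, or raise if every endpoint fails."""
    last_error = None
    for fetch in fetchers:
        try:
            return fetch()
        except Exception as exc:  # broad catch is fine for a sketch
            last_error = exc
    raise RuntimeError("all endpoints failed") from last_error

# Illustrative usage with stand-in callables:
def primary():
    raise ConnectionError("primary region down")

def secondary():
    return "ok from secondary"

print(fetch_with_fallback([primary, secondary]))  # -> ok from secondary
```

Production failover adds health checks, timeouts, and data-replication concerns, but the ordered-fallback pattern is the core of it.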

Conclusion
Reliance on SaaS will only continue to grow, and SaaS providers cannot afford setbacks. When a SaaS platform that employees spend 80% of their day on goes down, productivity - and therefore the company's bottom line - takes a hit.

SaaS providers will continue to struggle with rapid growth and increasing complexity as business users port more mission-critical applications to the cloud and end users demand increasingly higher levels of performance and productivity.

This situation has the potential to escalate unless SaaS providers address the issue head-on with new approaches that evolve APM to DEM. Business users must also do their part. Ultimately, this can translate to more proactive, productive performance management, with less finger pointing and war rooming and more accurate, decisive issue resolution.

More Stories By Mehdi Daoudi

Mehdi Daoudi is the co-founder and CEO of Catchpoint Systems, a premier provider of web performance testing and monitoring solutions. His team has expertise in designing, building, operating, scaling and monitoring highly transactional Internet services used by thousands of companies that impact the experience of millions of users.

Before Catchpoint Systems, Mehdi spent 10+ years at DoubleClick and Google, where he was responsible for Quality of Services, buying, building, deploying, and using monitoring solutions to keep an eye on an infrastructure that delivered billions of transactions daily.
