Posted by & filed under List Posts.

In the modern world, everything seems to be in a perpetual state of flux. There is perhaps no field to which this omnipresent change is more central than computing. Here are 10 IT trends and how to be prepared as they transform data centers in 2017.

 

  • Introduction: Information technology comes of age
  • IT trends becoming more prevalent in 2017
  • Innovative and responsive high-performance infrastructure

 

The long-range trends that are reshaping data centers through 2020 – limitless infrastructure, unceasing business needs, and an evolution of control – can sometimes seem beyond challenging. However, IT leaders must be prepared.

 

 

Introduction: Information technology comes of age

It’s impossible to know exactly what will happen in the coming years, but one trend that is impacting business at all levels is the transition from a mechanistic to an informational approach. This transformation has involved an overhaul of policies and procedures, management expectations, internal roles, and company culture.

 

IT is of course not new but is maturing, delivering a major impact to the consumer and business worlds at each phase of its development. Russian-American sociologist Pitirim Sorokin believed that the rise of the “information age” represented a radical cultural revolution that resembles the inception of agriculture or the advent of the scientific era – although more shocking because of its sheer speed.

 

The blisteringly fast increase in knowledge that is so readily acknowledged today was addressed by William Conboy in the 1960s, noted Chris Anderson of Bizmanualz. “Conboy estimated that the amount of knowledge in existence doubled between 1 AD and 1750,” reported Anderson. “Knowledge doubled again by 1900, 1950, 1960, and Conboy projected it to double again by 1963 and beyond.”

 

IT trends becoming more prevalent in 2017

Lists of trends are sometimes not given the credit they deserve. Yes, the Internet does become a bit obsessed with trends. However, genuine analysis of how the industry is evolving is invaluable.

 

For instance, esteemed analyst David Cappuccio listed top trends for IT decision-makers to use in their strategic plans at Gartner Symposium 2016.

 

One thing that is certainly changing is the perspective toward what is possible within a data center. Business leaders increasingly expect internal infrastructure to match the low cost and scalability of high-performance public cloud.

 

Trends mentioned by Cappuccio include:

 

  1. Data centers aren’t over

The on-premise data center is on the decline. By the end of the decade, 4 out of every 5 workloads will run off-premises, in Cappuccio’s estimation. These workloads are occurring through a patchwork of third-party locations.

 

Hybrid means flexibility but not simplicity, noted Cappuccio. “As workloads move off premise, our lives are not getting easier,” he said.

 

It’s necessary for CIOs and directors to pay close attention to key performance indicators (KPIs), regardless of whether systems are on- or off-premise.

 

  2. The fabric is growing

To foster resilience in a data center, disparate assets are peered within a multitenant fabric. Interconnect fabrics now peer remote sites as well.

 

The notion of fabric is to increase availability for better service continuity and, in turn, stronger UX.

 

The scope of a firm’s IT architecture is effectively broadening, noted Cappuccio. Infrastructure “is not just on prem, but [involves] all services provided to customers,” he said, adding that IT succeeds when “services are delivered from the right place, for the right price, from the right platform.”

 

  3. Stop: it’s container time

Let’s face it: containers are too legit to quit. Increasingly popular in development and DevOps, they allow apps to be partitioned as microservices for deployment on virtual or physical servers. It’s up to IT to supply the backend and support for this breakthrough model.

 

Containers are tricky because they are ready-made for scalability, but they are also characterized by impermanence – sometimes only existing for a split-second. Orchestration and automation will be key to managing container workloads.
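The orchestration idea above can be sketched in a few lines of Python. This is an illustrative toy only, not a real orchestrator: it shows the reconciliation loop at the heart of tools like Kubernetes, which compares desired state against whatever ephemeral containers are still alive and schedules the difference. All names here are invented.

```python
# Toy reconciliation loop: compare desired replica count with the
# containers observed to be running, and emit start/stop actions.
# Real orchestrators run this comparison continuously, because
# containers may vanish at any moment.

def reconcile(desired_replicas, running):
    """Return the actions needed to move `running` toward the target.

    running: list of container IDs currently alive.
    """
    actions = []
    if len(running) < desired_replicas:
        # Some replicas died or were never started; schedule replacements.
        for i in range(desired_replicas - len(running)):
            actions.append(("start", f"app-{len(running) + i}"))
    elif len(running) > desired_replicas:
        # Scale down: stop the surplus containers.
        for cid in running[desired_replicas:]:
            actions.append(("stop", cid))
    return actions

# Two of three replicas vanished; the loop schedules two replacements.
print(reconcile(3, ["app-0"]))  # [('start', 'app-1'), ('start', 'app-2')]
```

Run on every tick, a loop like this makes container impermanence manageable: operators declare the target, and automation closes the gap.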

 

  4. Business drives IT

More and more, heads of business departments are looking beyond the boundaries of their organization for their high-performance infrastructure needs. In fact, Gartner-verified data shows that nearly a third of IT dollars (29%) are spent on off-prem solutions.

 

As the cloud rolls in (leading, of course, to fog computing), IT will gradually transition to a more consultative role in brokering or curating services that better support the immediate needs of business.

 

  5. The service-oriented approach

IT can be viewed as a service provider. To extend the brokering or curation idea, the responsibility of the data center team is to find the services that best meet continuity, latency, security, compliance, recovery time objectives (RTOs), and other requirements.

 

  6. Waste management

Other research validated by Gartner estimates that ghost servers – which are powered on but serve no justifiable purpose – make up 28% of corporate infrastructure. Similarly, two out of five racks (40%) aren’t fully provisioned.

 

Designation of racks for certain departments is a primary reason for this misuse of resources, explained Cappuccio. “We must put more governance in place to understand what’s running and why,” he said.

 

Here are seven ways you can limit this form of waste:

 

  • Right-sizing your resources (provision to fit the job)
  • Tagging workload lifecycles for company-wide monitoring
  • Avoiding data egress so pointless copying doesn’t occur
  • Throttling workloads that are underused
  • Evaluating price structures to verify they are logical
  • Prioritizing open source management programs
  • Recycling stranded resources
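The right-sizing and throttling items above boil down to one habit: regularly auditing utilization. Here is a minimal, illustrative Python sketch of such an audit; the thresholds are made-up examples, not an industry standard.

```python
# Flag servers whose average utilization marks them as likely ghost
# servers (near-idle) or as candidates for more capacity.
# Thresholds are illustrative assumptions only.

GHOST_THRESHOLD = 0.05   # below 5% average CPU: candidate ghost server
HOT_THRESHOLD = 0.85     # above 85%: candidate for right-sizing up

def audit(servers):
    """servers: dict of name -> average CPU utilization (0.0-1.0)."""
    report = {"ghost": [], "resize_up": [], "ok": []}
    for name, util in servers.items():
        if util < GHOST_THRESHOLD:
            report["ghost"].append(name)
        elif util > HOT_THRESHOLD:
            report["resize_up"].append(name)
        else:
            report["ok"].append(name)
    return report

print(audit({"web-1": 0.42, "legacy-db": 0.01, "batch-9": 0.93}))
```

Feeding a report like this into a governance review answers Cappuccio’s question of “what’s running and why.”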

 

  7. Expansion of IoT

To understand the emerging scope of the Internet of Things, think twenties: by 2020, 20 billion devices will be online. To understand the security challenge, consider how popular IoT endpoints such as thermostats and webcams are as DDoS botnet slaves.

 

“IT must start thinking about an infrastructure to support IoT,” said Cappuccio. Networking and interoperability are key issues to address.

 

  8. Need for IoT management

Use of the IoT could come with extraordinary IT labor and administrative costs. Think in terms of installation, registration, calibration, testing, maintenance, and eventual disposal.

 

Yes, the IoT bears similarity to other data center needs, such as edge computing or bandwidth improvements, but when you map the future of the IoT, the most shocking trait is its scale.

 

  9. Building on the edge

Don’t look down. The centralized nature of infrastructure is being modified rapidly to better serve the business. Placing workloads in greater proximity to your users – especially in the era of real-time IoT needs – better contributes to high-performance infrastructure.

 

Edge computing or microcomputing sites are powerful in these scenarios. User management, distribution, and synchronization are all fields of knowledge that can help architects prepare.

 

  10. Up-and-coming IT roles

As IT continues to shift and reshape, giving rise to new responsibilities, what once were novel roles are becoming increasingly commonplace.

 

Cappuccio highlighted these six:

  • IoT architect – Processing, networking, and management of your IoT.
  • Cloud sprawl manager – Cost containment for stranded per-use resources.
  • Strategy architect – Refinement in delivering high-performance infrastructure to meet business objectives.
  • Capacity recovery/optimization director – Aligning resources with needs in a parallel function to sprawl managers.
  • Vendor broker – Grasp of available providers’ performance, cost, and SLAs.
  • End-to-end/performance manager – These roles “reflect the growing importance of workload performance and user satisfaction management in the enterprise,” said Stephen J. Bigelow of TechTarget, paraphrasing Cappuccio. “Knowing that each aspect of an application is running well can offer early warning for potential problems, as well as insight for improvement.”

 

Innovative and responsive high-performance infrastructure

 As we look toward how business computing will evolve through and beyond 2020, the focus will be on ramping performance and flexibility through hybrid, on-prem, and off-prem systems.

 

At Total Server Solutions, we provide a performance infrastructure and thoughtfully engineered services that function as a whole. Check out our solutions.


E-commerce itself is trending

Yes, the title of this piece is grandiose. However, it may not be an exaggeration. Increasingly, a company’s success reflects how well it understands e-commerce.

Take this statement from Cusha Sherlock of Credit-Suisse: “The e-commerce industry is a force that no investor can afford to ignore.” Also consider this forecast from eMarketer based on the available industry data:

  • 2016 retail e-commerce revenue – $1.915 trillion
  • Expansion will be at or above 10% each year through the end of the decade.
  • Revenue will exceed $4 trillion in 2020.

Now, of course, e-commerce is just a fraction of the global retail market, which totals $22.049 trillion worldwide (and is currently growing at 6.0% annually). However, consider that e-commerce retail is growing at a faster rate than general retail (10% vs. 6%), and that it represents an increasingly larger share of the overall figure each year. It’s currently at 7.4% of worldwide retail, and eMarketer projects it will hit 14.6% in 2020.

The bottom line is that it’s time to pay closer attention to e-commerce to future-proof your business. Let’s look at six top trends:

 

Trend #1 – Smoke-signal analytics

Analytics becomes more sophisticated when it ties in signals related to each possible path a customer can take. That’s the focus of software such as Kissmetrics, which assigns a signal to each source of traffic, tracking users from source to sale (or abandonment, or bounce).
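The source-to-outcome idea can be sketched as follows. This is a toy model of the pattern, not Kissmetrics’ actual API: each visit is tagged with its traffic source, and outcomes are rolled up into a conversion rate per source.

```python
# Toy source-to-sale tracking: tag every visit with its traffic
# source, then compute the share of visits from each source that
# ended in a sale (vs. abandonment or bounce).

from collections import defaultdict

def conversion_by_source(events):
    """events: list of (source, outcome) pairs, where outcome is
    'sale', 'abandon', or 'bounce'. Returns sale rate per source."""
    counts = defaultdict(lambda: {"sale": 0, "abandon": 0, "bounce": 0})
    for source, outcome in events:
        counts[source][outcome] += 1
    return {s: c["sale"] / sum(c.values()) for s, c in counts.items()}

events = [("email", "sale"), ("email", "bounce"),
          ("ads", "abandon"), ("ads", "sale"), ("ads", "sale")]
print(conversion_by_source(events))
```

Even this crude rollup shows which channels are pulling their weight, which is the point of signal-based analytics.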

 

Trend #2 – Real-time proposals & engagement

How important is it to interact with customers? Well, it will make you more money.

Gallup describes customers as fitting within three different categories: completely engaged, neutral, and completely disengaged. The pollster’s research reveals that engaged users visit e-commerce sites 44% more often than disengaged ones do, and their total purchase is higher ($373 on average, compared to $289 for a disengaged customer).

As businesses realize the importance of engaging visitors in the new year, they will use strategies such as these to do so:

  • Incorporating customer stories into the blog
  • Replying to customer concerns through video or a short piece
  • Sending e-mail newsletters with exclusive loyalty-based discounts
  • Consistently posting helpful information to social and on-site.

Businesses will review and potentially upgrade their adoption of live help desk platforms, advised Michael Lazar of Engadget. “This solution actively engages customers and allows them to ask questions via an online chat system, social media, phone, message and more,” he said. “Tickets are created that the customer support team can respond to in real-time.”

Why adopt an in-the-moment solution to serve prospects? If you think people prefer to handle everything by themselves online, that’s not the case: according to a six-nation survey of 5,700 digital shoppers, 87% want support at some point during their buying journey.

 

Trend #3 – The money in omnichannel

There are many reasons why people have hesitated to buy products on their cell phones and tablets: the desire to use a bigger screen; concerns over privacy and security; difficulty with the vendor’s app or mobile site; etc.

However, the world has gone increasingly mobile in the past five years. Those who think of the Internet as fundamentally a network of personal computers may be shocked that recent research revealed 56% of visitors to top sites are on a mobile device.

People still don’t want to use their smartphone or their iPad to buy, though. Robert Allen of Smart Insights described the findings of a study from 2016: “[A]lthough mobile (phone and tablet) accounted for 59% of all sessions by device on eCommerce sites, these mobile browsers made up just 38% of revenue,” he said. “Desktop was still dominating for conversion even though mobile browsing is the norm for research.”

The important thing to realize about your company is that its connection to customers is holistic rather than truly broken up into different channels. That’s the basic guidance behind the notion of the all-inclusive, omnichannel sales approach. After all, the study above also discovered that when a large portion of a company’s traffic is mobile, it comprehensively achieves better conversion. The user is checking out your product on their phone before they open their laptop or go to their desktop and buy. In other words, mobile users are buyers and should not be undervalued.
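The session-versus-revenue gap described above can be made concrete with a small calculation. The mobile figures below are the study’s rounded percentages; the desktop shares are simply their complements, inferred for illustration rather than quoted from the study.

```python
# Compare each channel's share of revenue to its share of sessions.
# A ratio above 1 means the channel converts above its traffic share;
# below 1 means users browse there but buy elsewhere.

def revenue_to_session_ratio(session_share, revenue_share):
    """Both arguments are fractions of the total (0.0-1.0)."""
    return revenue_share / session_share

mobile = revenue_to_session_ratio(0.59, 0.38)    # browsing-heavy channel
desktop = revenue_to_session_ratio(0.41, 0.62)   # conversion-heavy channel
print(round(mobile, 2), round(desktop, 2))       # 0.64 1.51
```

The two ratios together tell the omnichannel story: mobile feeds the funnel that desktop closes, so neither channel can be judged in isolation.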

 

Trend #4 – The encroaching singularity

You may know of best-selling author Ray Kurzweil, whose concept of the singularity suggests that artificial intelligence will suddenly generate rampant and rapid-fire growth in technology, bringing about systemic social alterations.

Well, artificial intelligence is indeed growing. Kit Smith of social media monitoring firm Brandwatch noted that AI digital-assistant tools such as Google’s Assistant, Microsoft’s Cortana, Apple’s Siri, and Amazon’s Alexa are changing the playing field for online sales. “This will impact E-commerce as the beginning stage of the research process may be increasingly conducted by chatting to a personal assistant,” he said. “Ecommerce brands will need to keep an eye on how these developments change the buyer journey and adapt.”

 

Trend #5 – Subscription fever

The rise of cloud-based SaaS programs has popularized and proven the subscription model for all e-commerce parties. It means consumers aren’t required to commit as many funds upfront and that companies are able to keep bringing in revenue over time.

No one said that subscriptions had to only be for technology, though. Some e-commerce companies have been popping up that embrace subscription purchasing of physical products, explained Allen. “Founded just five years ago in 2011, the dollar shave club, a prime example of a subscription-based eCommerce site, is now worth an incredible $615 Million dollars!” he said. “From nothing to $615 million dollars in five years. Just selling razors.”

 

Trend #6 – Chatbot mania

Get ready, everyone: the Internet is about to enter an era of chatbot mania. The chatbot is not just any bot. This critical marketing sidekick could be described as introductory in 2016 and emergent in 2017.

Chatbots are not a completely new idea, but their rising adoption has made them a trend – one deserving attention from marketing influencers and executives.

Julia Carrie Wong wrote a great article for The Guardian on chatbots in April 2016. It talks about how these new non-human AI virtual assistants emulate sales and service interactions.

The sort of case study Wong uses to express the increased focus on chatbots is the Kik Bot Shop – a spinoff of the messaging app Kik.

The service, presented in Fortune as a sort of app store for bots, started in April with 16 bots from brands such as Funny or Die, Vine, the Weather Channel, and H&M. The platform was open, so anyone could develop one for it (assuming they follow Kik’s ban on “adult” content).

“Without a chat bot, a user might direct his browser to weather.com, then type in their zip code to get the forecast,” explained Wong. “With the Kik’s Weather Channel bot, a user can send a chat asking for ‘Current Conditions’ or a ‘3-Day Forecast’ and the bot will reply with your answer.”
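The command-and-reply pattern the Weather Channel bot illustrates can be sketched in a few lines. This is a hypothetical minimal dispatcher; the phrases and canned replies are invented for illustration and have nothing to do with Kik’s actual platform.

```python
# Minimal chatbot dispatcher: map recognized chat phrases to handler
# functions, with a fallback reply for anything unrecognized.

def make_bot(handlers, fallback="Sorry, I didn't catch that."):
    def reply(message):
        # Normalize the message so "Current Conditions" and
        # "current conditions" hit the same handler.
        return handlers.get(message.strip().lower(), lambda: fallback)()
    return reply

weather_bot = make_bot({
    "current conditions": lambda: "Sunny, 72°F",
    "3-day forecast": lambda: "Mon 70°F / Tue 68°F / Wed 75°F",
})

print(weather_bot("Current Conditions"))   # Sunny, 72°F
print(weather_bot("tell me a joke"))       # fallback reply
```

Production bots replace the literal phrase lookup with intent classification, but the dispatch shape stays the same.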

How big are bots? There were more than 20,000 bots on Kik by August 3, just four months after the platform’s creation.

*****

Hopefully the above trends give you a better idea of how e-commerce is growing and changing. As you consider the ways the field is changing, remember that web hosting is a fundamental tool that can deliver the high-performance customer experience to propel your growth.

At Total Server Solutions, we support all of the top shopping cart applications, and we also offer merchant accounts so you can sell and accept payments quickly and easily. Get started now.


Do you want to be trendy with your content management system (CMS)? Well, you might not. But you certainly want to know how the playing field is changing. Here are the top emerging WordPress trends for 2017, in terms of strategies you can take with your own installation of the CMS.

 

A new year is important on a business level. Holiday bonuses mark the end of a successful, profitable year, for instance. More similar to personal resolutions, though, are the many trend lists released for virtually every segment of the economy. Essentially, looking over these trends, businesses can consider how their market is changing and think, “What are our resolutions this year?”

 

 

#1. Customize thy design.

Everyone wants to use a set of standardized, trusted tools – but, at the same time, stand out and express their own unique vision.

With that in mind, consider the impressive data from W3Techs Web Technology Surveys: “WordPress is used by 58.5% of all the websites whose content management system we know,” reported the service in December 2016. “This is 27.2% of all websites.”

In other words, WordPress is incredibly popular. Nonetheless, competitive advantage in business is fundamentally about differentiation. As more and more people move to WordPress because of its popularity, site owners will want to modify their sites to stand out. JavaScript, HTML5, and other markup/coding languages will be used for customization.

 

#2. Scroll to the future.

Scrolling became more predominant in 2016 because of increased mobile use. Scrolling is more necessary on smartphone screens that have less space to display information. Incredibly, it seems that from getting used to scrolling on smartphones, people became more likely to scroll on their desktops.

“The return of the scroll as an accepted user pattern provides more flexibility in the design and gives you more chances to interact with each visitor,” noted Carrie Cousins of design tutorial publication Design Bombs. “Think of all the opportunities to play games, include scrolling features (parallax!) and develop other creative ways to tell your story.”

Cousins also advised pondering UX deeply when you are putting together a long-scroll WordPress page. Make sure your audience actually prefers scrolling to clicking as the way to access content.

 

#3. Abide by SSL Everywhere.

Google wants SSL across the Internet. Although SSL certificates are far from perfect, Google sees them as (of course) a major improvement over an unencrypted site.

Roshan Perera of theme site Theme Junkie noted that the search giant now views sites lacking HTTPS protocol (generated automatically by SSL) as unsafe: “Starting 2017, HTTPS will be mandatory for all websites, including WordPress powered websites,” he said. “If you’re building a new WordPress site in 2017, you better [i]mplement HTTPS for your WordPress site from the very beginning.”
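The core of “SSL everywhere” is a simple rewrite rule: never serve an internal link over plain HTTP. In practice a WordPress site enforces this with a server-level redirect; the Python sketch below just shows the rewrite itself, using only the standard library.

```python
# Normalize any URL to HTTPS before it is served. Illustrative only:
# real sites do this with a web-server redirect rule rather than
# application code.

from urllib.parse import urlsplit, urlunsplit

def force_https(url):
    parts = urlsplit(url)
    if parts.scheme == "http":
        # Swap the scheme; host, path, and query are left untouched.
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(force_https("http://example.com/shop?item=3"))
# https://example.com/shop?item=3
```

Applying the same rule to every embedded asset URL also avoids mixed-content warnings once the certificate is in place.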

 

#4. Leave behind desktop-first design.

It’s become increasingly clear over the last few years how important it is to cater to mobile users through strategies such as mobile ads and responsive design (the latter of which alters the way the site populates to harmonize with the user’s device). In fact, there are now more smartphones and tablets accessing the Web than desktop computers: StatCounter’s analysis for October 2016 found that mobile devices represent 51.26% of access, while desktop represents 48.74% — the first time desktop has ever been overtaken.

Google is now prioritizing sites, WordPress and otherwise, that take a mobile-first approach. The extent to which the data used to rank your site will rely on mobile is a bit shocking. “Although our search index will continue to be a single index of websites and apps,” Google announced in November 2016, “our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results.”

 

#5. Ride the wave of parallax.

There’s a design technique called parallax (noted by Cousins in tip #2 above, and borrowed from astronomy) in which multiple backgrounds move at different speeds to create an illusion of 3D depth.

Say what? Here are more than two dozen examples of parallax in action, as featured by Awwwards.

 

#6. Get in on the typographic revolution.

Similar to the general trend toward customization mentioned above (tip #1), owners and managers of WordPress sites want text that stands out and captivates prospects. Developers can get creative with fonts using Google or Adobe’s font-creation apps, advised Review Squirrel, after which the finished products can be imported into WordPress.

 

#7. Embrace UI elements in a container format

Cards and other standard container elements are well-suited to mobile-friendly responsive design. Cards are great for intelligently storing data, with a single element in each container. Every box is a CTA, in a sense, asking that the site viewer put in their email, click a video, or purchase something.

Note: Cards can be modified for better compatibility with whatever design you want.

 

#8. Show, don’t tell.

Video has become less optional and more fundamental, with 2016 potentially serving as a tipping point. Yes, perhaps viewers have become hungrier for motion pictures of whatever sort over time, but the ramp-up in video is more about more efficient server technology, HD screens, and faster high-speed Internet.

Be careful with your video creation, noted Cousins. “Users demand high-quality action that tells a story,” she said. “From short snippets to more cinematographic-scale production, users will only pause to watch a video that’s good.”

Specific formats you could try, based on their effectiveness for other companies, are a short loop highlighting an item you’re selling or a more highly produced infotainment piece. Pro tip: Be sure to incorporate closed captioning.

 

#9. Review and adopt SaaS WordPress plugins.

The Internet has a fever, and the only prescription is more cloud computing! Now, keep in mind, storing your site on a cloud server for affordable high performance – IaaS (infrastructure as a service) – is just one way this virtualized model is being used for WordPress, with plugins aggressively switching to the SaaS (software as a service) model.

Perera commented that some in the open source crowd don’t like this SaaS plugin trend; however, it does create a market for strong WP tools such as OptinMonster and SumoMe.

 

#10. Set up your WordPress lemonade stand.

Review Squirrel noted that trusted ecommerce platforms are being integrated into WordPress sites as a standard expectation.

In this way, WP is increasingly becoming an environment through which people can more quickly and effectively make cash for their businesses.

 

#11. Tiny is huge.

Shift your business to the cutting edge of Web appearance with miniature logos and micro-designs that take a cue from watch design.

This trend is a step in the reverse direction from previous efforts to optimize for mobile by increasing scale of visuals on the site, Cousins advised. “The best part of this concept is that every design element must be created with intent and purpose,” she said. “There’s no place for elements that don’t help the user reach a goal.”

*****

Do you want to keep up with the changing WordPress landscape? As indicated above (tip #9), the first step is to supercharge your WordPress power with a high-performance, painstakingly maintained IaaS infrastructure. At Total Server Solutions, we believe that a cloud based solution should be scalable, reliable, fast, and easy to use. Get started.

 

Posted by & filed under List Posts.

Total Server Solutions is a class-leading provider of high performance infrastructure and enterprise solutions. We are currently seeking outstanding candidates to work as part of our Atlanta-based development team. We have an opening for a Cloud Operations Developer to work with a team to rapidly develop and deploy microservice applications. The ideal candidate should have experience with configuration management, an understanding of current coding architecture, and be able to translate high-level requirements into a working product. Total Server Solutions places a high value on candidates who possess initiative, work well within a team environment, and handle pressure with grace. Importantly, candidates need to demonstrate an ability to ship code rapidly, meet deadlines, and manage time effectively.

Responsibilities
The successful candidate will be responsible for working as part of our development team to help build PaaS products, as well as working to enhance the fitness of our internal systems and APIs. Your primary role would be a full stack developer. However, we are interested in someone who doesn’t mind wearing a DevOps hat when necessary to bring together internal components for Continuous Integration / development pipeline maintenance, or for configuration management purposes both internally and for customer-facing services.

  • Collaborate with a project manager and other developers to define and implement solutions for applications.
  • Present and defend solutions and milestones to peers.
  • Act as “go to” with firm knowledge of best practices and industry standards.
  • Initiate, suggest, and take charge of major sections of new and existing projects.
  • Conduct research and evaluate user feedback.
  • Maintain deployed applications and perform periodic code audits.
  • Work to improve performance and security of any internally developed systems.
  • Work with staff and customers to better understand how our applications can be improved.

Requirements

  • Experience working on an agile / RAD team.
  • Experience with popular design patterns, specifically MVC.
  • Business-driven project mentality to meet deadlines and produce code.
  • Experience with configuration management.
  • Experience working in a Linux environment.

Programming Languages

  • Python
  • PHP 7

Software

  • Git

Additional Experience (not required, but a huge plus)

  • Computer Science degree

Programming Languages

  • Golang

Software Packages

  • Docker
  • Mesos / Kubernetes / Terraform / etc.
  • Saltstack / Ansible / Puppet / Chef / etc.
  • Jenkins
  • Confluence / JIRA
  • Amazon Web Services
*This position is available in Atlanta only. Telecommuting is NOT possible with this position, and relocation assistance is not available.


If you have what it takes, please send your resume and cover letter to careers@totalserversolutions.com with the subject heading “Cloud Operations Developer.” That is the ONLY way we will see your resume for this position.


What are the top trends for cloud in 2017? Let’s look at the most critical ideas from thought-leaders in IT research and journalism.

 

At the turn of the year, people commonly take stock of the situation. Personally, people write New Year’s resolutions about losing weight, quitting smoking, and other aspects of self-improvement. In business, we think about how our industry and the markets that support it might be changing.

 

In that spirit, let’s look at cloud computing trends – in brief and more exhaustively.

 

 

Short-list of 2017 cloud trends

 

Containers, lift-and-shift, and SaaS specialization are trends that Forrester considers key to the field for 2017. But the big news is increasing adoption among the largest firms. “The No. 1 trend is here come the enterprises,” explained Forrester’s lead author, Dave Bartoletti. “Enterprises with big budgets, data centers and complex applications are now looking at cloud as a viable place to run core business applications.”

 

Here are 10 additional trends that the research group announced as the most important for the New Year:

2.  There will be increasingly diverse options beyond the megaclouds.

3.  Cloud service providers (CSPs) will start to include higher security for a more turnkey product.

4.  Buyers will lower their costs with cloud more extensively than through pay-per-use.

5.  Lift-and-shift functionality will become more sophisticated, streamlining cloud migration.

6.  Networking will continue to be the most vulnerable piece of hybrid cloud.

7.  Companies will become less interested in expensive and complicated private cloud platforms.

8.  SaaS will become further specialized to better fit different sectors and geographical locations.

9.  Chinese companies will emerge as major players in worldwide cloud development.

10.  Hyper-convergence will improve the viability of private clouds.

11.  The proliferation of container storage will cause disruption in cloud management.

 

Expansion of short-list + 4 bonus cloud trends

For a better understanding of how the cloud computing industry is getting ready to change in the New Year, let’s take a deep dive into a few Forrester trends (keyed to the numbered items in the list above). Then we’ll expand those ideas with a few additional projections from Information Age.

 

Small rises with megacloud (#2 above)

In an effort to save time and money, IT heads who previously elected to create a private cloud will find themselves warming to public environments.

 

Capital One is one of the big-name players to early-adopt public cloud. “We recognized that we were spending a lot of time, energy, effort and management bandwidth to create infrastructure that already exists out there in a much better state and is evolving at a furious pace,” said Rob Alexander, the bank’s CIO.

 

The statistics are with Alexander. Anyone familiar with IT growth projections can tell you that public cloud is skyrocketing. An $87 billion market in 2015, it is expected to exceed $146 billion in the New Year. Forrester currently clocks the expanding cloud economy at 22 percent CAGR.

 

Cost comparisons and support aside, this growth is simply too rapid for the megacloud providers to meet every business’s needs. Smaller and regional IaaS providers will become bigger business in 2017, said Bartoletti – who recommends being open-minded and embracing multi-cloud.

 

Better control of cloud expenses (#4 above)

Cloud can be a more affordable choice, but there are unforeseen expenses. Specifically, management can be tricky with a complex multi-cloud. Also, many companies keep public cloud instances running on Saturdays and Sundays, when they aren’t being used.

 

During the New Year, CIOs will improve in their ability to keep cloud costs down. Bartoletti noted, “There’s no reason in 2017 for your cloud costs to grow out of control.” He gave the example of a software firm that cut its cloud cost by 12% (reduced from $2.5M to $2.2M) simply through monitoring use.
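The weekend-idle point above lends itself to a back-of-the-envelope calculation. The hourly rate and business-hours figure below are invented examples, not quoted prices; the sketch just shows why stopping non-production instances off-hours moves the bill so much.

```python
# Compare the weekly cost of an always-on instance with one that runs
# only during business hours. Rates and hours are illustrative
# assumptions.

def weekly_cost(hourly_rate, hours_on=168):
    """168 = 24 hours x 7 days, i.e. always on."""
    return hourly_rate * hours_on

always_on = weekly_cost(0.50)            # 24x7 at a hypothetical $0.50/hr
business_hours = weekly_cost(0.50, 50)   # ~50 business hours a week
print(always_on, business_hours)         # 84.0 25.0

saving = 1 - business_hours / always_on
print(f"{saving:.0%} saved")             # 70% saved
```

Multiplied across a fleet of dev and test instances, schedule-based shutdowns are exactly the kind of monitoring-driven saving Bartoletti describes.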

 

Lift-and-shift cloudification (#5 above)

 

Companies will find value with lift-and-shift tools, using them to prepare apps for migration – rather than placing legacy apps in the cloud or manually rewriting the code.

 

You put your code in there (#11 above)

Containers help you package and manage code, especially for cloud apps, and Linux containers are becoming more commonplace. They do, however, require a thorough review and reshaping of security, networking, storage, and monitoring. Bartoletti said businesses will weigh the positives and negatives of setting up their own private PaaS against using a managed public cloud platform.

 

Enterprises embrace cloud (#1 above)

IT decision-makers are increasingly opting to host key apps in public cloud. “Enterprises are turning great ideas into software and insights faster,” said Bartoletti, “and the cloud is the best place to get quick insights out of enterprise data.”

 

Trend #12 – Building the cloud

Designing cloud architecture and aligning with best practices for cloud migration both require a new skill-set, beyond the ability to design traditional on-premises infrastructure.

 

In a public cloud setting, companies aren’t able to adjust configurations to meet the specifications of their service or app. Rather, they are given a standardized toolset that requires integration, noted Information Age editorial director Ben Rossi.

 

“It’s the difference between cooking for yourself from raw ingredients,” he said, “and ordering in a restaurant where the chef has set the menu and you choose the meal, associated ambience and service quality to suit your budget.”

 

Businesses will refine their grasp of cloud architecture so that migrations are seamless and problem-free.

 

Trend #13 – The dynamic multi-cloud

The world of cloud is not just about combining different services but about using them dynamically. Currently, most workloads sit with a single provider. In 2017, dynamically shifting workloads from one CSP to another will become a more common way for firms to assess their options.

 

With that in mind, Rossi noted that wise companies will build cloud services to be easily adaptable to various platforms and infrastructures – making it easier to move between providers without disrupting services.

 

Trend #14 – Transparent source

Open source is becoming the standard in cloud. It gives you access to a toolset for hosting and managing cloud, backed by a disparate but helpful support network. The open-source basis means you can get a relatively full-featured cloud system for free – paying primarily for the resources.

 

Trend #15 – Security and auditing safeguards

Shifting your systems to the cloud can feel like an effort to push security best-practices responsibility to another party. It is true that stronger CSP security oversight will be typical (#3 above).

 

However, it’s still critically important to verify that the CSP is committed to data security. Furthermore, it’s wise to audit the provider to confirm that the promised precautions are actually in place.

 

Companies will become better able to determine which suppliers they want to use, and they will look for ways to verify their infrastructure. Firms will also be more careful in reviewing policies for data security and governance.

 

“This will become ever more important in the light of the forthcoming GDPR regulations,” said Rossi, “and a written definition of all the data security policies and procedures will be required by the regulator when they conduct an audit.”

 

*****

As you can see, public cloud is growing and evolving rapidly. In this expanding field, you want a provider that deserves your business and can fuel your growth.

 

At Total Server Solutions, we are SSAE-16 Type II audited, and SSD lets us provide you with the guaranteed levels of performance that you demand. Order your cloud.

Posted by & filed under List Posts.

*** Breaking SSL Security News ***

Yes, major hacks of huge enterprises are disconcerting and deserve attention. But what’s perhaps even more distressing is an Internet-wide trend of neglecting security best practices. Consider this: an eye-popping 35% of websites use an SSL certificate based on the outdated, proven-unsafe Secure Hash Algorithm 1 (SHA-1). That’s a total of 61 million websites.

 

  • More than 1/3 of websites use a bad cert
  • Should you be very afraid of SHA-1?
  • Must-know info on the various SHA types
  • Why are we hitting the SHA-2 migration PANIC-BUTTON?

 


 

It’s easy for people to point fingers when it comes to Internet security. After all, like a FAIL video, it provides a sort of dark entertainment to look at the very public embarrassments of large enterprises and others that have been hacked. From Sony to Target, from Home Depot to the US State Department to worldwide financial institutions, breaches in security have become so commonplace that people often forget their incredible cost, in terms of loss of business (think Sony being thrown almost completely off the Internet) and loss of reputation (think Anthem, which states on its homepage, “Anthem is a trusted health insurance plan provider” – well, maybe).

 

The focus on these huge companies makes us forget the extent to which all companies are at risk, including simple blogs, startups, and other SMBs. Let’s look at a specific way that websites are making their users’ data vulnerable, making it clear how critical SSL upgrading today really is.

 

More than 1/3 of websites use a bad cert

Amazingly, a study by cryptographic key protection firm Venafi reveals that 35 percent of sites globally continue to use a no-longer-secure SHA-1 SSL certificate. That’s true even though major browser makers – including Apple, Google, Mozilla, and Microsoft – have stated that they will not support these certs starting in February 2017.

 

What exactly does that mean? Well, first, it should be understood that February 2017 is not a deadline to change these certificates. The deadline is today – SHA-1 is no longer secure.

 

However, just for further motivation, these are the typical messages and signs a user will see (with variations dependent on browser) when SHA-1 is officially no longer supported – as indicated by Help Net Security on November 21, 2016:

 

  • Crossed out lock icon and https (in address bar);
  • “Privacy error”;
  • “Your connection is not private”;
  • “Attackers might be trying to steal your information from [your site’s name, shown in bold] (for example, passwords, messages, or credit cards).”

 

All of these warnings disrupt traffic, which translates into a threat to your profits. When users see warnings like these – and no comforting, recognizable padlock – they will go to a competitor. In fact, the site could even become inaccessible.

 

Should you be very afraid of SHA-1?

Now, really, if you think you might still have a SHA-1 SSL cert in place, the fact that your site is currently not considered secure should be motivation enough: changing to an affordable, easy-to-install SHA-2 cert is urgent, best-practice security. For further motivation, consider that your users’ own software – the browser – will soon be advertising that your site is no longer secure.

 

Whether or not you are convinced this SSL switch is necessary, the end result is problems, since not every visitor will understand the warnings. SHA-1-retaining sites will suffer huge hits to user experience (UX) and ballooning support calls, along with potentially substantial losses in revenue and credibility.

 

Venafi’s cloud services manager Walter Goulet noted that the big, high-traffic sites have left for the security New World of SHA-2, but many sites are still using SHA-1. “According to Netcraft’s September 2016 Web Server Survey, there are over 173 million active websites on the Internet,” he said. “Extrapolating from our results, as many as 61 million websites may still be using SHA-1 certificates.”

 

That’s the exposure, but what’s the specific threat? Hackers can potentially crack SHA-1, rendering it useless – in other words, opening access to data. Moore’s Law, Gordon E. Moore’s famous observation about computing growth, holds that processing power roughly doubles every two years – so these attacks only get cheaper. Electronic Frontier Foundation board member Bruce Schneier has framed the issue in terms of dollars on his blog:

  • It takes 2^74 processing cycles to hack the SHA-1 algorithm with the strongest tools available. Those cycles can be converted into time.
  • The approximate cost would be $2.77 million to use public cloud to brute-force-attack SHA-1. That’s not really a lot, depending on the target – and the number is falling fast.
  • The expectation is that it could cost just $43,000 to run a hack of SHA-1 by 2021. Even at that point, methodically running through the numbers for a successful hack would take 7 years.
  • While seven years may seem like a mini-eternity (well, it’s half a dog’s life), the issue is one of scale. Stronger, better-future-proofed algorithms such as SHA-2, SHA-3, and AES-256 can take centuries or millennia to crack. A cackling evildoer might assemble a botnet of computing power to run the attack much more quickly – perhaps in less than a month for the right price. “That is precisely what the American NSA, the British GCHQ, and the Chinese military are doing now,” advised PCrisk on November 21, 2016. “Hence there is some risk.”

 

Must-know info on the various SHA types

 

Secure Hash Algorithm 1 (SHA-1) is a cryptographic hash algorithm – in other words, a set of steps a computer takes to condense data into a fixed-size fingerprint. In an SSL certificate, that hash is used to sign the certificate that enables the HTTPS protocol on a site, vouching for the site’s identity.

 

So far, so good, right? Well, SHA-1 means well, but it has known vulnerabilities, and SHA-2 and SHA-3 are taking its place. As indicated above, SHA-1 will no longer be accepted by major browsers from February 2017 forward; and it does not meet today’s security best practices – accelerating the drive to next-gen SHA-2 SSL certificates.

 

The fact is that this transition away from SHA-1 has been a long time coming but never completely caught on. Part of the difficulty with upgrading was that SHA-1 was the most commonly used hash, while SHA-2 until recently lacked support on a vast range of devices and software. In fact, the NSA-devised SHA-1 hash is more than two decades old, first issued as a federal government standard in 1995.

 

SHA-2 is not exactly brand-new; it became the hashing standard all the way back in 2002. It is sometimes considered a family of hashes because it comes in various bit sizes – especially 224, 256, 384, and 512. So SHA-2 is not a set number of bits, explained security architect Roger A. Grimes in InfoWorld, but the overwhelming majority of certs in this category use the 256-bit variant. “Although SHA-2 is constantly attacked and minor weaknesses are noted, in crypto-speak, it’s considered ‘strong,’” he said. “Without question, it’s way better than SHA-1, which experts believe will be fallible in the near term.”
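You can see the difference in digest sizes across the SHA family for yourself with Python's standard hashlib module – a quick sketch:

```python
import hashlib

message = b"hello, world"

# SHA-1 produces a 160-bit (20-byte) digest -- now considered too weak.
# The SHA-2 family offers several sizes; 256-bit is the most common in certs.
for name in ("sha1", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, message).hexdigest()
    print(f"{name}: {len(digest) * 4} bits -> {digest[:16]}...")
```

The larger digests are what make brute-force collision attacks against SHA-2 so much more expensive than against SHA-1.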

 

Why are we hitting the SHA-2 migration PANIC-BUTTON?

Grimes was a bellwether for moving to SHA-2 back in January 2015. He said at the time that the challenge of migrating to the new hash would be figuring out which devices and programs work with it. To jumpstart the process, create an inventory of all devices, operating systems, and apps that must support SHA-2, then test each system to confirm that it actually works – don’t assume vendor attestations are accurate.

 

“Upgrading your applications and devices will not be trivial and probably take longer than you think,” said Grimes. “Migrating from SHA-1 to SHA-2 isn’t hard technically, but it’s a massive logistical change with tons of repercussions and requires lots of testing.” Your internal public key infrastructure (PKI) should be updated to support SHA-2 also.

 

***

Are you concerned about the topics discussed in this article? At Total Server Solutions, we offer premium, name brand certificates from market leader Symantec. Upgrade today to SHA-2 SSL.

Posted by & filed under List Posts.

How important is Black Friday to your e-commerce site? Forget the hype: the statistics make clear that this day and weekend are a huge boon for the economy, both online and in person. Let’s take a look at basic information about Black Friday; sales statistics for the day itself and for Cyber Monday; and tips to prepare your site for a huge spike in activity.

 

  • What and when is Black Friday?
  • Funny, ignoble history of Black Friday
  • Holiday e-commerce sales trending
  • 5 tips to get ready for Black Friday & Cyber Monday


 

What and when is Black Friday?

Before we get into the stats, let’s talk about the basics. Black Friday is the name given to a huge shopping day in the United States. It comes directly after Thanksgiving, which is always the fourth Thursday in November. It also comes three days before Cyber Monday – another big sales day specifically geared toward e-commerce stores. Black Friday and Cyber Monday dates for 2010 through 2020 are as follows, per Timeanddate.com:

  • 2010 – Friday, November 26; Monday, November 29
  • 2011 – Friday, November 25; Monday, November 28
  • 2012 – Friday, November 23; Monday, November 26
  • 2013 – Friday, November 29; Monday, December 2
  • 2014 – Friday, November 28; Monday, December 1
  • 2015 – Friday, November 27; Monday, November 30
  • 2016 – Friday, November 25; Monday, November 28
  • 2017 – Friday, November 24; Monday, November 27
  • 2018 – Friday, November 23; Monday, November 26
  • 2019 – Friday, November 29; Monday, December 2
  • 2020 – Friday, November 27; Monday, November 30.

 

I would be remiss if I didn’t use the Timeanddate Black Friday tool’s dropdown option to see when this holiday will occur in the year 3950:

  • 3950 – Friday, November 24.

Please mark your 40th century calendars.

 

Funny, ignoble history of Black Friday

Everyone hopes that their company makes out well on Black Friday, but the history is actually amusing and not as pleasant as you might think. There’s this idea that Black Friday grew out of companies going “into the black” – their revenue boosted into positive territory as they approached the end of the year. That sounds nice, but it isn’t the true origin!

 

The truth about Black Friday is that it was named by Philadelphia police officers in the 1950s. They used the term for the day after Thanksgiving, but “black” was meant to describe the dismal nature of the day – similar to a black eye – referencing the huge crowds that rushed into the city for the annual Army-Navy game. “Not only would Philly cops not be able to take the day off, but they would have to work extra-long shifts dealing with the additional crowds and traffic,” notes Sarah Pruitt on History.com.

 

In any event, the holiday now means something different than it did at its inception – but it remains one of the most important shopping days of the year. Let’s get on to the sales statistics and tips for improving your success.

 

Holiday e-commerce sales trending

Again, this day usually represents more sales volume than any other day of the year. Money is flowing. Here are some mind-bending statistics on how huge a day it is for both Internet and brick-and-mortar retailers:

 

Big portion of retail sales – The period between Black Friday and Christmas accounts for about 30% of annual retail sales. It’s particularly high for certain types of retail – such as jewelry, which does almost 40% of sales in that window.

 

Many people actively shopping – A well-known poll from the National Retail Federation (NRF) shows these Black Friday shopper counts for 2011-2015:

  • 2011 – 85 million
  • 2012 – 89 million
  • 2013 – 92 million
  • 2014 – 87 million
  • 2015 – 74 million.

 

People are moving online – That little dip above is where these statistics really get interesting. The numbers above show how big a day Black Friday is, but they are actually specific to brick-and-mortar. As you can see, the number of people shopping in physical stores dropped 19.6% between 2013 and 2015. Those people were going online: shoppers made $2.72 billion worth of e-commerce transactions on Black Friday 2015, a 14% rise from 2014.

 

Black Friday is huge, and if you think it’s shrinking, it’s not. It’s going online.

 

5 tips to get ready for Black Friday & Cyber Monday

 

How do you get ready for this potentially incredible sales weekend?

 

  1. Think like a customer by focusing on usability.

 How user-friendly is your site? That question is answered with usability testing. Basic components of a usability test are:

  • Navigation – Assess your site to see how intuitively someone can move around and explore, through widgets, menus, etc.
  • Content/text – Make sure there aren’t any instances in which your written copy is confusing.
  • Visual coherence – The headers and the text should all work together meaningfully. Colors and fonts should harmonize with one another.
  • Performance – You need a high performance infrastructure, optimized media, and to otherwise set yourself up for reliability and speed.
  • Support – It should be easy and obvious how someone can reach you when they need assistance.

 

“User friendliness can have a significant impact on retention of visitors and the rate of their conversion into customers,” notes Mike Azevedo, CEO of database company Clustrix, in Entrepreneur. “[I]t’s crucial to create a positive experience for visitors.”

 

  2. Roll out the red carpet for mobile.

People spent $42.1 billion shopping on mobile devices in 2013, a figure expected to reach $132.7 billion by 2018. Consumers move from one screen to another throughout the day, so you want a shopping cart that is not just responsive but syncs across any customer’s devices. When your site is mobile-friendly, people keep shopping.

 

  3. Personalize your promo.

You obviously want your site to have its own personality, but recognize that behemoths such as eBay and Amazon succeeded with data personalization and marketing to the individual. What can be learned from the .com household names? It’s beneficial (hopefully mutually so) to gather consumer data from various touchpoints and customize content to fit the viewer. To filter users or prospects, use location (ZIP codes) and personal characteristics (sex, age, and similar criteria).

 

Keep in mind, personalization doesn’t mean you’re getting as granular as a unique experience for each user. Azevedo notes that you could just split your customers to receive two different offerings, Promo A and Promo B. “By using visitor data,” he says, “an ecommerce site can not only provide a personalized shopping experience but also increase the chances that customers will ultimately purchase.”

 

  4. Get your social working for you.

You want to think carefully about how you use social, such as how you use images to tell your company’s story, the personality you create around your business, and what hashtags you use. However, you also want to remember strong options such as email newsletters and PPC ads to let people know what you have available for Black Friday and/or Cyber Monday (or Black Friday through Cyber Monday – a relaxed sales bonanza weekend?).

 

You want a specific person in charge of responding to social media comments throughout this massive sales weekend. Complaints are a good thing, because they point you to what needs to be fixed for them and others. Plus, you want to know what people are saying about you and your competitors so that you can monitor and protect your reputation.

 

  5. Think scalability/elasticity from day one.

Azevedo recommends choosing a database that is built with scalability as a top priority throughout development. He suggests that you will be better able to grow if you know your database can expand linearly, processing information and updating seamlessly as more users and devices connect with the network and do business with you.

 

The issue of scalability is one of the primary reasons that cloud computing has become so popular. Being able to access resources based on your fluctuating needs is critical for e-commerce, not just for growth but for expanding and contracting through busy seasons. This scalability that is so essential to the kind of tipping-point growth that every business wants to achieve is inherent in the architecture of a distributed, virtualized cloud.

 

Arnon Rotem-Gal-Oz notes on Stack Overflow that elasticity – being able to adjust resources dynamically – is also a central characteristic of a strong cloud infrastructure. “[W]hen load increase you scale by adding more resources and when demand wanes you shrink back and remove unneeded resources,” says Rotem-Gal-Oz. “Elasticity is mostly important in [a] Cloud environment where you pay-per-use and don’t want to pay for resources you do not currently need on the one hand, and want to meet rising demand when needed on the other hand.”
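The elasticity Rotem-Gal-Oz describes can be sketched as a simple scaling rule in Python – note that the capacity figure and instance limits below are purely illustrative assumptions, not from any provider:

```python
import math

def desired_instances(current_load, capacity_per_instance=100,
                      min_instances=2, max_instances=50):
    """Scale out as load rises; shrink back as demand wanes."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# A quiet weekday vs. a Black Friday surge (requests per second, assumed)
print(desired_instances(150))    # light load -> minimum footprint
print(desired_instances(4200))   # holiday surge -> scaled out
```

In a pay-per-use cloud, rules like this are what let you meet rising demand without paying for idle resources the rest of the year.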

 

*****

If you want strong results for Black Friday and Cyber Monday, you absolutely have to be fast and efficient. Meeting those objectives will always be directly related to the quality of your infrastructure and your ability to scale and elastically respond to demand. At Total Server Solutions, our SSD-based cloud hosting boasts the highest levels of performance in the industry. Learn more.

Posted by & filed under List Posts.


 

In Part 1 of this piece, we essentially talked about why the speed of high performance infrastructure is important, tools to quickly test your site, and how a faster site specifically boosts revenue. Now let’s discuss steps you can take – beyond infrastructure – to accelerate your site, followed by another reason you need strong, reliable hardware: business continuity.

 

  • Beyond infrastructure, how do you get fast?
  • High performance infrastructure: key to business continuity
  • HA as fundamental to high performance
  • Vow to be redundant
  • Is your load imbalanced?
  • Um, did we mention CDNs or the cloud?

 


Beyond infrastructure, how do you get fast?

Performance must be considered from multiple angles. Along with internally implementing or working with a web host that has high performance infrastructure, here are a few additional steps you can take to get your site moving, highlighted by Sherice Jacob on the Kissmetrics blog.

  1. Tell your site to gzip it. Many web thought-leaders recommend compressing responses using this common method. “Compression reduces response times by reducing the size of the HTTP response,” notes Chris Coyier of CSS-Tricks. “Gzip is the most popular and effective compression method currently available and generally reduces the response size by about 70%.”
  2. Quarantine your stylesheets. You want JavaScript and CSS sectioned off in their own files, so that they only load once per user.
  3. Crunch your images. You can slim your images with the “Save for Web” feature in Fireworks and Photoshop. If you don’t have image-editing software, a free online compression tool can do the job.
  4. Don’t expect HTML to do the heavy lifting. HTML allows you to adjust size once you have something uploaded (as through the WordPress UI). Bear in mind that the browser still loads the image at full size prior to resizing it, though.
  5. Cache yourself. WordPress and other CMS platforms have caching plugins that set aside the latest version of your site so that the page doesn’t have to populate from scratch with every browser request. WP Super Cache is widely used.
  6. Beware complex detours. You want to retain SEO you’ve built and modify your structure with 301 redirects. However, a jumble of redirects results in latency.
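The gzip step (item 1 above) is easy to demonstrate: most HTML is repetitive markup, so it compresses dramatically. A minimal sketch with Python's standard library:

```python
import gzip

# A page full of repeated markup, like most real-world HTML.
html = b"<div class='product'><span>Item</span></div>" * 200

compressed = gzip.compress(html)
ratio = 1 - len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes "
      f"({ratio:.0%} smaller)")
```

On a real server you wouldn't do this by hand – you'd enable gzip (or Brotli) in the web server config – but the size reduction is the same idea.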

Beyond all the DIY steps you can take above to improve your speed, Jacob also mentions one infrastructural component that she believes is critical: a content delivery network (CDN). “Content Delivery Networks work by serving pages depending on where the user is located,” says Sherice. “Faster access to a server near their geographical area means they get the site to load sooner.”

 

High performance infrastructure: key to business continuity

It’s boring for most people to think about infrastructure, because it sounds like a bunch of machines and wires shut away in some warehouse – mere conduits for human activity. Think about this, though: that infrastructure is what allows your business to function and operate on a moment-to-moment basis!

 

After all, digital reality doesn’t just connect you with customers through content and e-commerce but with your colleagues. Consider how reliant you are on email and project management or other collaborative software.

 

Since we have become so dependent on these tools in an effort to increase efficiency, the high availability (HA) that is inherent in high performance infrastructure becomes a central concern.

 

HA as fundamental to high performance

HA isn’t optional but necessary if you want to maintain business continuity in a well-integrated, connected company. In other words, you need your infrastructure to suffer very little downtime.

 

A sound high-availability strategy “detects points of failure that can potentially cause the downtime and mitigates failure by distributing the load and traffic across the infrastructure,” notes TechAcute. “In the event of failure, a high availability infrastructure will have failover and recovery mechanisms.”

 

There are numerous reasons why you might experience downtime, because of failures in different parts of your system, such as:

  • Hardware;
  • Operations; or
  • Internal programs.

 

Your downtime could also stem from customer interaction with your website. You might see a spike and go down if your server isn’t prepared for Black Friday, for instance. After all, 30% of retail sales each year occur between Black Friday and Christmas, according to Kimberly Amadeo in The Balance. Let’s put that into perspective: if sales in that period matched the pace of the rest of the year, it would represent about 8% of annual sales. The actual 30% is 3.75 times that 8% expectation – a 275% increase over the expected pace for the average e-commerce site. In the Black Friday economy, 100 lava lamps translate into 375 lava lamps. An infrastructure that isn’t high-performance and readily scalable can’t keep up with that pace.
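That arithmetic is easy to check in a couple of lines:

```python
holiday_share = 0.30   # share of annual retail sales, Black Friday-Christmas
expected_share = 0.08  # roughly that window's share of the calendar year

ratio = holiday_share / expected_share          # 3.75x the "even" pace
increase = (ratio - 1) * 100                    # percent above expectations
print(f"{ratio}x the even pace = {increase:.0f}% increase")
```

Capacity planning starts with exactly this kind of multiplier: whatever your average load is, the holiday window demands several times more.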

 

Your site might also become unreliable because you get hit with a DDoS attack or experience other hacking activity. In other words, security is an element that must be built into a high performance infrastructure.

 

Why is avoiding downtime so important, whether it’s caused by a flood of real or phantom traffic? “Aside from loss of potential sales, customers might not trust your brand or business in the future,” explains TechAcute. “Similarly, a business using an enterprise platform to manage its resources will compromise the integrity of internal communications.”

 

One other aspect of the availability that you achieve with a high performance infrastructure is that you are able to meet the expectations of the service-level agreements you hand to your customers. That’s just one more reason you never want your infrastructure to be the weakest link.

 

Vow to be redundant

If you’re writing an English paper, it’s fair of the professor to dock you for being redundant – because in that context, repetition isn’t appreciated. However, you want repetition, i.e. redundancies, and failovers in your infrastructure so that the system has alternatives when parts malfunction.

 

“Redundancy is having extra components available in the case a component fails,” notes Brian Heder in Network World. “Failover is the mechanism, be it automatic or manual, for bringing up a contingent operational plan.”

 

These two elements must be considered in the interest of the HA you achieve with a high performance infrastructure.
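Heder's redundancy-plus-failover idea can be illustrated with a minimal sketch – the backends here are hypothetical stand-ins for real components:

```python
def failover_call(backends, request):
    """Try each redundant backend in order; the first success wins."""
    errors = []
    for backend in backends:
        try:
            return backend(request)
        except Exception as exc:  # in production, catch specific error types
            errors.append(exc)
    raise RuntimeError(f"all backends failed: {errors}")

def broken_primary(req):
    raise ConnectionError("primary is down")

def healthy_standby(req):
    return f"served '{req}' from standby"

# The primary fails, so the request transparently lands on the standby.
print(failover_call([broken_primary, healthy_standby], "GET /"))
```

Real failover usually happens at the load balancer, DNS, or cluster-manager level rather than in application code, but the contingency logic is the same.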

 

Is your load imbalanced?

Strong load balancing means using the simple tactic of distribution to your advantage. Distributing systems builds a huge amount of redundancy, but you want to make sure the load is actually balanced across all your hardware.

 

“Cheap datacenter hosting will not accommodate a surge of users or other factors that can put a heavy strain on the servers,” says TechAcute. “Overload in the servers will cause an online service to go down.”

 

Load balancing means that your traffic is evenly running through various servers, so that your system is naturally more available and can maintain great speed.
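The simplest form of that distribution, round-robin, can be sketched in a few lines of Python – the server names are placeholders:

```python
from itertools import cycle
from collections import Counter

servers = cycle(["web-1", "web-2", "web-3"])  # hypothetical server pool

# Spread 300 incoming requests evenly across the pool.
assignments = Counter(next(servers) for _ in range(300))
print(assignments)  # each server ends up with 100 requests
```

Production load balancers add health checks, weighting, and session affinity on top of this, but even distribution is the core idea.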

 

Um, did we mention CDNs or the cloud?

As indicated by Jacob above, a CDN can be powerful in delivering high availability: it limits the distance between any user and your content by utilizing datacenters across broad geographical locations, improving how quickly a page loads on a case-by-case basis.

 

To optimize high availability, it’s best to pair a CDN with the cloud. “Cloud platforms are perhaps the most cost-efficient solution in bringing about high availability,” advises TechAcute, “because your business does not have to invest in the capital expenditure required to purchase, run and maintain hardware.”

 

High performance infrastructure can increase your productivity and revenue, as well as maintain the trust and credibility of your brand. At Total Server Solutions, our high performance SSD-based cloud and CDN grow with you. See how.

Posted by & filed under List Posts.

Do the poor page load times of your website effectively hold your business down – leaving it unable to deliver a strong user experience, attract high search rankings, and grow? Specifically, how does poor page loading cut into your revenue? Speed up your website to regain control of your upward trajectory. Moving to a high performance infrastructure is one essential step in the process. **WARNING: This piece contains a major potential time-waster.

 

  • Why should your website be as fast as possible? (Stats)
  • High performance infrastructure “hidden” from PageSpeed tool
  • Other handy page load tools
  • More ecommerce sales with a faster site? Yes.
  • SSAE-16-Type-2-audited high performance infrastructure

 

Why should your website be as fast as possible? (Stats)

There are plenty of studies out there that indicate how critical speed is for the average user. Two of the most eye-opening studies were published a few years ago, as detailed in Econsultancy. Each of them has been circulated heavily ever since (perhaps qualifying as “classic IT market research” given their continuing relevance to understanding user behavior):

 

  1. Forrester Consulting, survey of 1048 online shoppers, 2009
  2. “Why Web Performance Matters,” interviews of 1500 consumers, 2010.

 

Here are some of the most interesting statistics from the two studies, highlighted in tandem by Kissmetrics “Minister of Propaganda” Sean Work in 2011:

  • Nearly three-quarters (73%) of people who regularly surf the Web on smartphones or tablets say that they have come across sites that had unacceptably slow page load times.
  • Just over half (51%) of people who access via mobile say they have either had an experience with an error message due to slowness, or have experienced a site freezing or crashing.
  • More than a third of shoppers (38%) say that they came across a site that they could not reach.
  • Nearly half of consumers (47%) say that they think a website should load within 2 seconds (what would make them happy), and two in five say they will leave if it doesn’t load within 3 seconds (goodbye, customers).
  • If your page load times become 1 second slower, you will see your conversion rate drop as much as 7%.
  • In terms of actual dollars, how much does one second of slower loading cost you? Just as an example, if your site generates $100,000 per day, one second of additional load time means you could be leaving behind $2.5 million of revenue annually.

 

Although these studies are both a few years old, human behavior hasn’t changed all that much since the dawn of the information era in the early 1990s. Just look at the 1993 book referenced by Website Magazine in 2014. Penned by usability consultant Jakob Nielsen, Usability Engineering suggests that tiny slices of time have major impacts on user perception. Nielsen lists three time limits that relate to UX in terms of basic psychology. “If the application responds instantaneously to the user’s actions, it gives an appearance of direct manipulation,” he wrote, referring to a limit of 0.1 seconds. “This phenomenon of direct manipulation is a great key to increase user engagement.” If loading instead takes 1 second, the person becomes aware that the system, rather than they, is in control; they have a second to think, but won’t immediately disengage.

 

Keep in mind, in today’s world, one second could be considered an eternity. In fact, the New York Times reported in 2012 on Google findings that even 400 milliseconds is too long for users.

 

Why were so many analyses of speed conducted between 2009 and 2012? It was top news: in 2010, Google officially announced that it was building speed into its algorithm as an SEO ranking factor. Roger Dooley posited in Forbes, “While [Google’s Matt] Cutts noted at the time that initially only a small percentage of sites would see a significant change in ranking or traffic due to page speed factors, I find it likely that the emphasis will increase over time.”

 

Dooley suggests that Google typically avoids drastic, sudden swings that would make it difficult for credible websites to keep pace. However, he thinks the statement indicates that high performance infrastructure will be increasingly represented among the top results.

 

Interestingly, Dooley also thinks that the 2010 “Why Web Performance Matters” study – the one that interviewed 1,500 consumers about Internet speed and listed 2 seconds as the expected load time – was too generous with its timeframes. As indicated above, Google agrees, and so does Nielsen’s 1993 analysis.

 

We can debate the specific period of time a user will stick around, and obsess over that hard number, but the most important takeaway from all these studies is twofold: 1. potential buyers have time expectations; and 2. they will leave if those expectations aren’t met.

 

High performance infrastructure “hidden” from PageSpeed tool

One of the most important and widely used tools out there is Google’s PageSpeed Insights. Here’s a good pro tip: Dooley advises that PageSpeed does not include the speed of your network in its score. That means you could have a great score with that tool but still be suffering in search because of factors related to your network connection and server. In other words, it’s an intentional blind spot of the tool that could leave many ecommerce companies and others feeling overly confident about their speed.
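One way to see the part PageSpeed leaves out is to measure time to first byte yourself, which captures DNS, connection, and server time. Here’s a minimal sketch using only the Python standard library (the URL shown is just a placeholder):

```python
import time
import urllib.request

def time_to_first_byte(url: str) -> float:
    """Rough time-to-first-byte measurement, in seconds.

    Captures DNS lookup + connection setup + server processing time,
    which a static page analysis does not report.
    """
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read(1)  # wait only for the first byte of the body
    return time.perf_counter() - start

# Example (placeholder URL):
# print(f"TTFB: {time_to_first_byte('https://example.com/'):.3f} s")
```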

 

Other handy page load tools

Two other tools offer different, complementary views of page speed:

  1. Geographical diversity: Using the Neustar Website Load Testing Platform (available via a free 30-day trial), you can look at load times from different locations worldwide – so you can see where in the world user experience is stronger and weaker.
  2. Direct comparison: For a different angle, there’s a tool called Which Loads Faster? (warning: major potential time waster ahead) that lets you race sites against one another. You can check the load time once or multiple times per site. It then lists the average milliseconds per page load and tells you how many times faster the winning site is than the slower one.
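In the same spirit as Which Loads Faster?, a rough head-to-head race is easy to sketch yourself. This is a simplified approximation, not the tool’s actual method; it times only the raw HTML fetch, not images or scripts:

```python
import time
import urllib.request

def load_time(url: str) -> float:
    """Seconds to download a page's HTML once."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=15) as response:
        response.read()
    return time.perf_counter() - start

def race(url_a: str, url_b: str, runs: int = 3):
    """Average several fetches per site and report the winner and speed ratio."""
    avg_a = sum(load_time(url_a) for _ in range(runs)) / runs
    avg_b = sum(load_time(url_b) for _ in range(runs)) / runs
    winner = url_a if avg_a <= avg_b else url_b
    ratio = max(avg_a, avg_b) / min(avg_a, avg_b)
    return winner, ratio

# Example (any two URLs):
# winner, ratio = race("https://example.com/", "https://example.org/")
# print(f"{winner} is about {ratio:.1f}x faster")
```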

 

More ecommerce sales with a faster site? Yes.

The various studies listed above were building on similar research from IT research firm the Aberdeen Group that was conducted in 2008 and republished by popular demand in 2015. “A 1-second delay in page load time equals 11% fewer page views, a 16% decrease in customer satisfaction, and 7% loss in conversions,” reported the Aberdeen researchers.

 

To better understand the need for speed directly in terms of ecommerce, let’s revisit the two studies featured in Econsultancy/Kissmetrics from a different perspective. Blogging and conversion author Sherice Jacob, in her analysis of these studies, focuses on different statistics than those mentioned above. She cites these two:

 

  1. More than three-quarters (79%) of visitors say that if an online store is slow to load, they won’t come back.
  2. Close to half (44%) say that they would mention an instance of annoyingly slow ecommerce performance to a friend.

 

“This means you’re not just losing conversions from visitors currently on your site, but that loss is magnified to their friends and colleagues as well,” notes Jacob. “The end result – lots of potential sales down the drain because of a few seconds difference.”

 

SSAE-16-Type-2-audited high performance infrastructure

We will look at some specific ways to speed up your site in the second part of this series (linked below), but one central focus must be your server and network. At Total Server Solutions, we offer an array of high performance infrastructure solutions, backed by our world-class technicians. Let’s do this!

 

 


 

 

A critical vulnerability was recently discovered in virtually all versions of the Linux operating system, and it is actively being exploited in the wild. The vulnerability is about nine years old, but only now has it been exposed and, in some instances, exploited. Dubbed “Dirty COW,” the Linux kernel security flaw (CVE-2016-5195) is merely a privilege-escalation vulnerability, but researchers are taking it extremely seriously for a few important reasons.


 

First, it’s very easy to develop exploits that work reliably. Second, the Dirty COW flaw exists in a section of the Linux kernel that is part of virtually every distribution of the widely used open-source operating system – including Red Hat, Debian, and Ubuntu – released over almost a decade.

 

Most importantly, researchers have discovered attack code indicating that the Dirty COW vulnerability is being actively exploited in the wild. Dirty COW potentially allows any installed malicious app to gain administrative (root-level) access to a device and completely hijack it.

 

Why Is the Flaw Called Dirty COW?

The bug, marked as “High” priority, gets its name from the copy-on-write (COW) mechanism within the Linux kernel, which is so broken that any application or malicious program can tamper with read-only, root-owned executable files and setuid executables.
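For context, here is what copy-on-write is supposed to do. In the sketch below (Python on Linux), a write to a MAP_PRIVATE mapping lands in a private copy of the page and never reaches the file on disk; Dirty COW was a race condition in exactly this code path that could let the write hit the underlying file:

```python
import mmap
import os
import tempfile

# Normal copy-on-write behavior (what the kernel is supposed to guarantee):
# writing to a MAP_PRIVATE mapping modifies a private copy of the page,
# never the underlying file.
fd, path = tempfile.mkstemp()
os.write(fd, b"read-only data")

with mmap.mmap(fd, 0, flags=mmap.MAP_PRIVATE,
               prot=mmap.PROT_READ | mmap.PROT_WRITE) as m:
    m[0:4] = b"HACK"  # the write lands in the private copy only

# The file on disk is untouched -- the "COW breakage" Dirty COW races against.
assert open(path, "rb").read() == b"read-only data"

os.close(fd)
os.remove(path)
```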

 

The notification regarding Dirty COW on Red Hat’s bug tracker states:

“A race condition was found in the way the Linux kernel’s memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings. An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system.”

 

(https://bugzilla.redhat.com/show_bug.cgi?id=1384344)

 

The Dirty COW vulnerability has been present in the Linux kernel since version 2.6.22 in 2007, and is also believed to be present in Android, which is powered by the Linux kernel.
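Since the flaw dates to kernel 2.6.22, a first triage step is simply comparing your running kernel version against that baseline. The sketch below is only a rough check: distributions backport the fix without changing the base version string, so your vendor’s advisories are the real authority on whether a given kernel is patched.

```python
import platform

# CVE-2016-5195 was introduced in kernel 2.6.22 (2007).
FIRST_AFFECTED = (2, 6, 22)

def parse_kernel(release: str) -> tuple:
    """Extract the numeric base version: '4.4.0-45-generic' -> (4, 4, 0)."""
    base = release.split("-")[0]
    return tuple(int(part) for part in base.split(".")[:3])

def possibly_affected(release: str) -> bool:
    """True if the base version falls in the affected range.

    A True result does NOT prove vulnerability -- the fix may be backported.
    """
    return parse_kernel(release) >= FIRST_AFFECTED

print(platform.release(), "->", possibly_affected(platform.release()))
```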

 

All servers covered by our server management plans include Ksplice/KernelCare, which automatically updates the server’s kernel without reboots. We’re always trying to be proactive, especially with regard to security. With that in mind, we wanted to let you know that your server has already been patched and is not vulnerable to this bug.

 

However, if you currently use CentOS 5 / Red Hat 5, be aware that no further updates will occur after April 1, 2017, due to CentOS 5 / Red Hat 5 reaching End of Life (EOL). We highly recommend that ALL customers currently on CentOS 5 / Red Hat 5 update as soon as possible. Our sales team can help you explore options to move past those two soon-to-be-obsolete OS options.

 

As always, if you have any questions, please contact us; we’re always available to help.