Split Testing E-Commerce Revenue

Posted by & filed under List Posts.

Sometimes it can be difficult to figure out exactly what it is that is making your company’s growth plateau or shrink. In fact, it is often challenging to even perceive some potential culprits because they seem so fundamentally beneficial. Nonetheless, it is important to ask hard questions – and, in so doing, put different aspects of your company under a microscope – if you want to grow. (For example, have you truly adopted high-performance hosting so that your infrastructure is furthering UX?)


In that spirit, here’s a question: Is it possible that split testing (or A/B testing) could be hurting your e-commerce revenue? Clearly, the concept behind split testing is a sound one: by presenting different versions of a page to random portions of your audience, you should be able to determine which one is preferable based on how well each version turns site visitors into users or customers. This method has even, somewhat controversially, been used by major newspapers to split-test headlines, driving more traffic to news stories to keep the outfits prominent in the digital era.


A/B testing seems to be a smart way to better understand how your prospects and users make decisions; so how could it hurt your revenue? Online growth specialist Sherice Jacob notes that the trusted, somewhat standardized practice often does not deliver the results that business owners and executives expect. Jacob points out that this form of digital analysis, somewhat bizarrely, “could be the very issue that’s causing even the best-planned campaign to fall on its face.”


In a way, though, it’s not bizarre. Thoughtful business decisions often have unexpected results. (Anything can be done well or poorly – such as your choice of host, which will determine whether your infrastructure is secure. Failure to look for SSAE-16 auditing is an example of a mistake made when picking a web host.) What mistakes can be made when split testing? How and why does it fail? Let’s take a look.


  • How many tails do you have?
  • The magic of split testing: Is it all an illusion?
  • Getting granular – 6 key questions for core hypotheses
  • SEO hit #1 – failure to set canonicals
  • SEO hit #2 – failure to delete the losing option
  • Results from your e-commerce hosting


How many tails do you have?


Analytics company SumAll put two identical copies of a page – with no differences whatsoever – into one of the most well-known split-testing tools, Optimizely. Option A beat option B by almost 20%. Optimizely fixed that particular issue; nonetheless, it reveals how misleading the output from these experiments can be. Imagine, after all, if those pages had had just one minor difference. You would then confidently assume that A was the better choice, and feel backed up by the software’s numbers.


The reason an issue like this can arise with A/B testing comes down, fundamentally, to the design of the algorithms built into the tool. These designs are categorized as one-tailed and two-tailed. A one-tailed test simply tries to find a positive connection – a black-and-white answer. With just one tail, says Jacob, you are left with statistical blind spots. Two-tailed testing looks at these e-commerce outcomes from two different angles.


The distinction made by the UCLA Institute for Digital Research and Education helps to clarify:


  • One-tailed – Testing that is based on determining whether there is a relationship from a single direction “and completely disregarding the possibility of a relationship in the other direction.”
  • Two-tailed – No matter which direction you use to address the relationship, “you are testing for the possibility of the relationship in both directions.”
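
To make the distinction concrete, here is a minimal sketch of a two-proportion z-test (the visitor and conversion counts are made up for illustration), showing how the same data can "win" under a one-tailed reading while failing a two-tailed one:

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z, one-tailed p, two-tailed p)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    one_tailed = 1 - phi(z)             # "is B better than A?"
    two_tailed = 2 * (1 - phi(abs(z)))  # "are A and B different at all?"
    return z, one_tailed, two_tailed

# Hypothetical test: 1,000 visitors per variant
z, p_one, p_two = z_test(conv_a=120, n_a=1000, conv_b=147, n_b=1000)
```

With these made-up numbers, the one-tailed p-value clears the 5% significance bar while the two-tailed one does not – exactly the kind of blind spot Jacob describes.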


The magic of split-testing: Is it all an illusion?


In 2014, conversion optimization firm Qubit published a white paper by Martin Goodson with the shocking title, “Most Winning A/B Test Results are Illusory.” In the report, Goodson presents evidence that shows that poorly performed split testing is actually more likely to lead to false conclusions than true ones – and, well, um, bad information should not be integrated into e-commerce strategy.


The crux of Goodson’s argument comes down to the concept of statistical power – which can be understood by imagining a project in which you want to find out the height difference between men and women. Measuring only one member of each sex would not give you a very broad set of data. Using a larger population of men and women – getting a large set of heights by measuring many people – means the average heights will stabilize and the actual difference will be better revealed. As your sample size grows, you gain statistical power.


To get back to the notion of split-testing, let’s say that you have two variants of the site you want to assess. Group A sees the site with a special offer. Group B sees the site without it. You simply want to calculate the difference in response based on the presence of the offer. The difference between the two results should be considered in light of the statistical power (amount of traffic).


What is the significance of statistical power? Knowing what you need in a sample size (volume of traffic) ensures that you don’t stop the testing before you have collected enough data. Stop too early, and it is easy to see false positives that lead you in the wrong direction.
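
If it helps to put numbers on this, here is a rough sketch using the common normal-approximation formula for sample size. The baseline and target conversion rates are hypothetical, and the constants assume 5% two-sided significance and 80% power:

```python
import math

def sample_size_per_variant(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variant to detect p_base -> p_target.

    Normal-approximation formula; z_alpha=1.96 is 5% two-sided
    significance, z_beta=0.84 is 80% power."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    delta = p_target - p_base
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Hypothetical: detect a lift from a 4% to a 5% conversion rate
n = sample_size_per_variant(0.04, 0.05)
```

Detecting a 4%-to-5% lift reliably takes several thousand visitors per variant; halting earlier than that is how bogus winners get crowned.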


Goodson says to think of a scenario in which two months would give you enough statistical power for results to be reliable. A company wants the answer right away, so they test for just two weeks. What is the impact? “Almost two-thirds of winning tests will be completely bogus,” he says. “Don’t be surprised if revenues stay flat or even go down after implementing a few tests like these.”


Getting granular – 6 key questions for core hypotheses


You want results that are meaningful from these tests. Otherwise, why bother? Think in terms of possible sources of confusion or frustration for visitors, either at the level of the hook or within the funnel, advises Qualaroo CEO Sean Ellis. Get this information directly from users via surveys or other comments.


Based on those bits and pieces, come up with a few hypotheses – your hunches about what you can do that might improve the conversion rate or give you better business intelligence. You can see whether or not those hypotheses are correct using the A/B tests, via an organized testing plan. A testing plan will make it much easier to strategize and consistently collect more valuable information.


These 6 questions can guide you as you develop your testing plan, says Ellis:


  1. What is confusing customers?
  2. What is my hypothesis?
  3. Will the test I use influence response?
  4. Can the test be improved in any way?
  5. Is the test reasonable based on my current knowledge?
  6. What amount of time is necessary for this test to be helpful?


That short list of questions can help you become more sophisticated with your A/B testing to avoid false positives and use the method in its full glory.


SEO hit #1 – failure to set canonicals


Split testing can also hurt your SEO. Because an A/B test serves two nearly identical versions of the same page, search engines can read them as duplicate content. Setting a canonical URL for each test page tells the search engines which version to index.
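
As a sanity check, you can confirm that both variants declare the same canonical URL. Here is a minimal sketch using only Python's standard library (the URLs are hypothetical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Pulls the href out of a page's <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

def canonical_of(html):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

# Both A/B variants should point at the same canonical URL
variant_a = '<head><link rel="canonical" href="https://example.com/product"></head>'
variant_b = '<head><link rel="canonical" href="https://example.com/product"></head>'
```

In practice you would fetch each variant's live HTML and compare the two results; if they differ (or one is missing), search engines are left to guess.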


SEO hit #2 – failure to delete the losing option


Another issue for SEO (and in turn for your revenue, if not your conversion) that can be caused by A/B testing is when you do not delete the page that loses in the comparison. That’s particularly important if you’ve been testing the choices for a while – since that generally means that the search engines will have indexed it.


“Deleting it does not delete it from search results,” notes Tom Ewer via Elegant Themes, “so it’s quite possible that a user could find the page in a search, click it, and receive a 404 error.”
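
A common fix is a permanent (301) redirect from the retired variant to the winner, so old links and already-indexed URLs still land somewhere useful. The decision logic, sketched with hypothetical paths:

```python
# Retired A/B variants mapped to their winning counterparts (hypothetical URLs)
RETIRED_VARIANTS = {
    "/landing-b": "/landing-a",
    "/checkout-old": "/checkout",
}

def resolve(path):
    """Return (status, location): 301 to the winner for a retired
    variant, otherwise 200 and the path unchanged."""
    if path in RETIRED_VARIANTS:
        return 301, RETIRED_VARIANTS[path]
    return 200, path
```

In a real deployment this mapping lives in your web server or CMS configuration rather than application code; the point is that a 301 preserves the visitor (and link equity) instead of serving a 404.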


Results from your e-commerce hosting


Just as you want to see impressive results (and not a downturn) from your split-testing, you want your hosting to be working in your favor as well – and for online sales, security and performance are fundamental. At Total Server Solutions, compliance with SSAE 16 is your assurance that we provide the best environmental and security controls for data & equipment residing in our facilities. See our high-performance plans.

How to Set Up a Non-Blog WordPress Site


While WordPress excels as a blogging platform – blogging was its original function – it has become increasingly sophisticated as a general tool for building websites. You could create an e-commerce shop, a portfolio, or a business site this way.


Note that you could easily include a blog with that site (either upfront or later on) if you want – as indicated below in the discussion of a page for posts. The blog does not have to be the defining centerpiece of your site, though.


Here is how you would go about setting up WordPress as a static site, drawing many of these ideas from WordPress theme and plugin company DesignWall.


  • What exactly is a static site (vs. a dynamic one)?
  • How to set the homepage of your static WordPress site
  • Creating the menus of your site
  • How to make your non-blog WordPress site stand out
  • The option of doing posts within their own page
  • Great hosting for strong WordPress UX


What exactly is a static site (vs. a dynamic one)?


A static site will have a homepage that does not change, no matter what new content you have go up elsewhere. That is in contrast to a dynamic site, which would be changing as you add new material – displaying the most recent posts from your blog. (Note that to be completely technically accurate, your site will remain dynamic if you use WordPress as its basis no matter what; however, you are essentially giving your site a static face regardless of its specific designation from a technical standpoint.)


The homepage will always use the same exact page – so let’s talk about that aspect.


How to set the homepage of your static WordPress site


You will be able to move forward with establishing this page whether or not you are just getting started with a new installation. Don’t worry about exactly what you want to say. You can create the page and then go back into it later to figure out exactly what your message will be. Just follow these 7 steps:


  1. Log into your WP admin account.
  2. Click on Pages in the left-hand sidebar, and select Add New Page.
  3. Give it the simple name “Homepage” for now (which can be changed later).
  4. Your theme may give you the option to turn off Comments and Pingbacks, typically both listed under “Discussion.” If those options are not available there, you will see them as small checkboxes on each page in the upper-right-hand corner above where it says “Publish.”
  5. To test and go live with this page, go into Reading Settings, which is within Settings in the sidebar.
  6. There, you will see Front page displays, and you want that to be “A static page”; to complete this option, select “Homepage” and then Save Changes.
  7. Look at your site, and you should see the Homepage displayed as your homepage.


Creating the menus of your site


It is time to establish menus for your static WordPress site. However, before we move forward with menus, think about what other pages you will need, and go ahead and create draft versions of those. Just create the pages at this point, without concerning yourself about the content. By having these pages at least in very rough place-holding form, you will be able to set up your navigation menu in a more logical and meaningful way.


Go ahead and add those pages the same as you did the Homepage. They can have simple names at this point. Beyond a homepage, here are the “must-have” pages for a 10-page business site, according to custom WordPress theme firm Bourn Creative: About, Services, Products, FAQ, Testimonials, Contact, Privacy Policy, Newsroom, and Portfolio. Adjust as makes sense for your industry and company.


Now go to the left Sidebar, click on Appearance, and select Menus. Here, you will see that you can add any of the pages you just created to your menu, which (depending on your theme) is typically displayed on your main header or in the sidebar.


You can nest any of the menu items by dragging and dropping them into position.


It is also possible to change names of menu items to whatever you want (without having to rename the linked page). Go into Menu Settings, and you can automatically add pages to the menu if that makes sense to you.


It is not necessarily a good idea to add pages automatically, though, because you could end up with a lot of clutter. You probably want certain pages to be especially prominent (e.g. About Us, Products or Services, etc.).


You may also have the option within your theme to change where this menu can be seen on your site.


How to make your non-blog WordPress site stand out


Probably, you do not want a mediocre non-blog WordPress site. You want a great one. Here are some tips on how to make it stronger from Alyssa Gregory on SitePoint:


  1. Choose a strong theme. Gregory notes the importance of the theme in terms of how your content will be displayed. You may not want to have dates in your posts, for instance. Something with a magazine format will typically work well.
  2. Figure out how pages and posts make sense. Gregory also mentions that you do not have to set up a non-blog WordPress site as a series of pages; you can use posts instead. However, using pages is more organized, from her perspective. If you do use posts or are dedicated to that structure for whatever reason, her advice would be to stick to it – because trying to create a hierarchy that crosses between pages and posts on a non-blog WP site could quickly get confusing. However, you can really use both in a meaningful way as long as the posts all appear within a certain setting, on their own distinct page (so you have someplace that you’re building content, even if it’s not the basis of the site). See below on that.
  3. Dig into the code. Inevitably, the theme will need a little adjustment “under the hood.” That will allow you to clear out some of the more blog-centered elements that are built into the theme. An example would be when you turn off the ability to comment. You may still have a No Comments line in many themes, but that could be removed at the level of the code. It is usually also a good idea to clear out the RSS subscription option and anything else that is more of a reference to blogging than to a website without the blogging function.


The option of doing posts within their own page


You do not have to have a page for your Posts. However, if you do use Posts on a non-blog site, you will want to organize them within a page so that the non-blog structure remains the basis for everything. Actually, it does not hurt to create this page, says WPSiteBuilding.com, even if you don’t use it at this point. Generally, a blog is considered a good idea for search prominence and general engagement. This page could be called Blog or News or Thoughts or Updates – whatever you want. Just give it a title, leave the body empty, and publish the page to test it.


Great hosting for strong WordPress UX


Do you want to deliver the best user experience through your non-blog WordPress site? At Total Server Solutions, we are always working to find the best, most effective ways to serve you and provide solutions to help you meet your challenges. Explore our platform.

Could IoT Botnet Mirai Survive Reboots


Mirai has been making a zombie army of swaths of the internet of things, so it is no wonder that manufacturers are taking steps to protect against it. However, one IoT device manufacturer’s failed attempt to protect its products against the botnet (used in massive DDoS attacks) shows how challenging this climate has become. When IoT-maker XiongMai, based in China, attempted to patch its devices so that the malware would be blocked, the result was described as a “terrible job” by security consultant Tony Gee.


Gee explained that he took products from the manufacturer to a trade convention, the Infosecurity Europe Show. The Floureon digital video recorders (DVRs) used in Gee’s demo did not have telnet open on port TCP/23 – but shutting off telnet access was insufficient as a defense.


Gee went through port 9527 via ncat. The passwords matched those of the web interface, and it was possible to open a command shell. Within the command shell, Gee opened a Linux shell and established root access. From the root user position, it was simple to enable telnet.


Even for devices with telnet closed down, an attacker can get in via the shell and restart the telnet daemon, explained Gee, adding ominously, “And we have Mirai all over again.”


  • Tale of an immortal zombie
  • How could Mirai grow larger?
  • The doom and gloom of Mirai
  • How to protect yourself from DDoS
  • Layers of protections
  • What this all means “on the ground”


Tale of an immortal zombie


Mirai is changing, much to the frustration of those who care about online security. Prior to this point, malware that was infecting IoT devices (such as routers, thermostats, and CCTV cameras) could be cleared away with a reboot.


A method was discovered in June that could be used to remotely access and repair devices that have been enslaved by the botnet. The flip side of this seemingly good news is that the same avenue is a way that a Mirai master can generate reinfection post-reboot – so researchers did not release details.


Notably, BrickerBot and Hajime also have strategies that try to create a persistent, “immortal” botnet.


The firm did, however, list numerous other weaknesses that could be exploited by those running the botnet.


How could Mirai grow larger?


What are other possible paths of exploit that would allow Mirai to grow even larger than it is now? Those include:

  • DVR default usernames and passwords that can be incorporated into the worm element of Mirai, which uses brute-force methods through the telnet port (via a list of default administrative login details) to infiltrate new devices.
  • Port 12323, an alternative port used as telnet by some DVR makers in place of the standard one (port 23).
  • Remote shell access, through port 9527, to some manufacturers’ devices through the username “admin” and passwords “[blank]” and “123456.”
  • One DVR company whose passwords changed every single day (awesome) but were all published within the manual on its site (not awesome).
  • A bug that could be accessed through the device’s web server, accessible through port 80. This firmware-residing buffer overflow bug currently exists in 1 million web-connected DVR devices.
  • Another bug makes it possible to get password hashes from a remote device, using an HTTP exploit called directory traversal.
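
If you want to audit a device you own against the exploit paths above, a quick sketch is to check whether the risky ports accept connections. Probe only hardware you are authorized to test; the example address is hypothetical:

```python
import socket

# Ports abused in the exploit paths above: telnet, the alternate
# telnet used by some DVR makers, the remote shell, and the web server
RISKY_PORTS = [23, 12323, 9527, 80]

def open_ports(host, ports=RISKY_PORTS, timeout=1.0):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # refused, filtered, or timed out
    return found

# Example (hypothetical address of a DVR on your own LAN):
# print(open_ports("192.168.1.50"))
```

An empty result is no guarantee of safety – Gee's demo shows telnet can be re-enabled from an exposed shell – but anything listed here deserves attention.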


The doom and gloom of Mirai


The astronomical expansion of Mirai is, at the very least, disconcerting. One recent report highlighted in TechRepublic found that Internet of Things attacks grew 280% during the first six months of 2017. The botnet itself is at approximately 300,000 devices, according to numbers from Embedded Computing Design. That’s the thing: Mirai is not fundamentally about IoT devices being vulnerable to infection. It’s about the result of that infection – the massive DDoS attacks that can be launched against any target.


Let’s get back to that infected and unwitting Frankenstein-ish army of “things” behind the attacks, though – it could grow through changes to the source code by hackers, updating it to include more root login defaults.


The botnet could also become more sophisticated and powerful as malicious parties continue to transform the original so that it has more complex capacities to use in its DDoS efforts. Today it has about 10 vectors or modes of attack when it barrages a target, but other strategies could be added.


How to protect yourself from DDoS


Distributed denial of service attacks from Mirai really are massive. They can push just about any firm off the Internet. Plus, the concern is not just about that single event of being hammered by false requests. Hackers first open up with a toned-down attack, a warning shot that is often not recognized as a pre-DDoS by custom in-house or legacy DDoS mitigation tools (as opposed to a dedicated DDoS mitigation service). These dress-rehearsal attacks, usually measuring under 1 Gbps and lasting 5 minutes or less, are under the radar of many DDoS protection solutions that have settings requiring attack traffic to be more substantial.
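
To see why those warning shots slip through, consider a toy sketch of a pure volume-threshold detector (real mitigation services use far richer signals than this; the traffic figures are hypothetical):

```python
def naive_detector(traffic_gbps, threshold_gbps=1.0):
    """Flags traffic only when it exceeds a fixed volume threshold."""
    return traffic_gbps > threshold_gbps

probe = 0.8        # a sub-1 Gbps "dress rehearsal" probe
full_attack = 600.0  # a full-scale Mirai-class barrage

# The probe goes unflagged, so the real attack arrives with no warning
```

A detector tuned to ignore anything under 1 Gbps treats the reconnaissance probe as ordinary traffic, which is precisely what makes these pre-DDoS tests useful to attackers.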


When DDoS started more than 20 years ago, engineers would use a null route, or remote trigger blackhole, to push the traffic away from the network and prevent collateral damage to other possible victims.


Next, DDoS mitigation became more sophisticated: traffic identified as problematic on a network was redirected to a DDoS scrubbing service – in which human operators analyzed attack traffic. This process was inefficient and costly. Also, remediation often did not get started right away following detection.


Now, DDoS protection must both be able to “see” a DDoS developing in real time and be able to gauge the DDoS climate for trends, generating proactive steps to mitigate an attack. Enterprise-grade automatic mitigation protects you from these attacks and maintains your reliability.


Layers of protections


There are various levels at which distributed denial of service can be and should be challenged and stopped. First, a DDoS protection service against real and present threats, built by a strong provider, can effectively keep you safe from these attacks – but there are other efforts that can be made as well. Internet service providers (ISPs) can also protect the web by removing attack traffic before it heads back downstream.


Defense should really be at all levels, though. The people who make the pieces of the IoT – software, firmware, and device manufacturers – should build in protections from the start. Installers and system admins should change the default passwords and apply patches as soon as possible.


What this all means “on the ground”


It’s important to recognize that this issue is not just about security firms, device manufacturers, and criminals. It’s also about, really, all of us: the home users of devices, such as the DVR. (If you don’t know, a DVR is a device that records video on a mass storage device such as an SD memory card or USB flash drive… when it isn’t busy being used in botnet attacks).


The home user should be given reasonable security advice. Many users do not respond quickly when new patches are released. IoT devices are often built just strongly enough to operate; security is not a priority. That is bad – but it means users need to do their homework on security prior to purchase. They also need to change passwords from the defaults to complex, randomized ones.
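
Generating a complex, randomized replacement password is easy to script; here is a minimal sketch using Python's standard secrets module:

```python
import secrets
import string

def random_password(length=16):
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

new_password = random_password()
```

The secrets module draws from the operating system's cryptographically secure random source, so the result is not guessable the way a default like "123456" is.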


Protect yourself from Mirai


What can you do to keep your business safe from Mirai and other DDoS attacks? At Total Server Solutions, our DDoS mitigation & protection solutions keep your site up and running, your content flowing, and your customers buying, seamlessly. How does it work?

Mirai Botnet Master Bestbuy



Anonymity. It is a characteristic that is often not viewed positively. We all want to be recognized for our accomplishments and actions, our most impressive or good-hearted deeds. However, sometimes, we would prefer to remain in the shadows – and that’s especially true for the criminals among us; after all, their identification could lead to jail time and other unwanted consequences.


Well, if anonymity is what you want, you probably should avoid prominence in the DDoS community – or face the wrath of Brian Krebs. Krebs, an independent investigative journalist who specializes in information security, has developed a knack for unmasking malicious online parties. He is probably best known as the guy who was targeted with one of the biggest distributed denial of service (DDoS) events of all time – and responded by following a trail of data crumbs to identify the specific person he believed was responsible for the mega-attack.


Let’s briefly review the initial attack on Krebs (with a massive army of Mirai IoT devices) last September and the unmasking of the Mirai author in January. Then we will double back to November, when Bestbuy (Bestbuy = Daniel Kaye) and another hacker (or simply another identity for Kaye himself) started taking control of the botnet. From there we will proceed to the downfall of Bestbuy: his arrest in February. Then we will go over Krebs’ correct identification of Kaye prior to the release of his name (another victory worth noting); and, finally, the controversial suspended sentence Kaye received from the German court, the precursor to a trial he is expected to soon face in England.


  • Bestbuy unmask prequel: Anna-Senpai
  • From hacker duel to handcuffs
  • Krebs fingers Kaye
  • How to protect yourself from DDoS


Bestbuy unmask prequel: Anna-Senpai


At approximately 8 pm EST on September 20, 2016, KrebsOnSecurity started getting hit with a blast of bogus traffic that measured 620 Gigabits per second. Krebs had DDoS protection and his site was not pushed offline; however, it certainly got his attention. It turned out to be one battle in a larger hackers-versus-Krebs war: many think Krebs was targeted because of a previous story. On September 8, less than two weeks before his site was hit, Krebs named the two Israeli hackers behind a very successful DDoS-as-a-service company that brought in $600,000 over two years; the two men he named in that piece (both just 18 years old) were arrested two days later.


Krebs noted that he thought the attack was probably a retaliation against that article, saying that freeapplej4ck was a string contained within some of the POST requests during the DDoS attack. This term was “a reference to the nickname used by one of the vDOS co-owners,” Krebs said.


It certainly seems that those behind this Mirai assault were gluttons for punishment, since Krebs had already proven himself adept at tracking down hackers. Fast-forward to January, and Krebs fingered Paras Jha, Rutgers University student and president of the DDoS mitigation service ProTraf Solutions, as the author of Mirai. (Note that Jha has not been charged with any crimes, as of July 28, per Krebs.)


From hacker duel to handcuffs


The security world became fixated on Mirai following this assault on Krebs, for obvious reasons. In November, Motherboard indicated that the attack on Krebs – followed up by ones on Spotify, Twitter, German ISP Deutsche Telekom, and other major services – was headed for even darker territory. Two hackers, or one with two identities, had created another enormous botnet using a variant of Mirai, and they were offering it as a pay service (similar to vDOS).


One of the two hackers (or the only one, if it is the same person) was better at bragging than he was at spell-checking; after telling Motherboard that he had more than a million hacked IoT devices under his control, he boasted, “The original Mirai was easy to take, like candy from this kids” [sic]. He was referencing the hacker battle to be the new godfather of all these compromised devices. One popular perspective at the time was that the fresh strain was created by a current Mirai botmaster in order to enslave additional devices to its army.


Unfortunately for Bestbuy, law enforcement was soon on his tail. In February, British police arrested a 29-year-old man at a London airport; however, notably, they did not release his name. The arrest was the first one related to Mirai. The German Federal Criminal Police Office (BKA) noted that the 29-year-old was being charged with an attack on Deutsche Telekom – soon after which Kaye/Bestbuy had messaged Motherboard that he was one of the people behind it.


“Bestbuy is down,” concluded Jack B. of the DDoS research collective SpoofIT at the time.


Krebs fingers Kaye


How did Krebs identify Bestbuy? Here are key points made to connect Bestbuy to Kaye:


  • When the Mirai botnet was used to take Deutsche Telekom offline, the domain names affiliated with the servers controlling it were registered to “Spider man” and “Peter Parker” (Spider-Man’s alter ego). The street address used for registration was in Israel.
  • Only nine domains have ever been associated with the IP address tied to the botnet that took the German ISP offline. Eight of those domains were related to Mirai. The one that was not was dyndn[dot]com, a site that sold GovRAT, a remote access trojan (RAT) designed to log keystrokes. GovRAT has been used to attack over 100 corporations.
  • GovRAT was offered for sale by a user Spdr, with the email spdr01@gmail.com, on oday[dot]today.
  • Another malware service that was sometimes sold with GovRAT allowed people to fraudulently use code-signing certificates. Within the digital signature for that program was the email parkajackets@gmail.com.
  • The email addresses spdr01@gmail.com and parkajackets@gmail.com were the ones used for the vDOS usernames Bestbuy and Bestbuy2. (Remember Krebs’ article that identified the founders of that Israel-based DDoS-as-a-service ring.)
  • In addition to access from Israel, Bestbuy and Bestbuy2 logged into vDOS from Internet addresses in Hong Kong and the UK. Bestbuy2 actually only existed because the Bestbuy account was canceled for logging in from those international addresses.
  • A key member of the Israel-based IRC chat room and hacker forum Binaryvision.co.il had the email spdr01@gmail.com and was nicknamed spdr01.
  • Binaryvision members told Krebs that spdr01 was about 30; had dual citizenship in the UK and Israel; and was engaged.
  • That Binaryvision user’s social accounts were both connected to a 29-year-old man named Daniel Kaye. Kaye’s Facebook profile used the alias DanielKaye.il (Israel’s top-level domain) and showed he was engaged to marry a British woman named Catherine. The profile photo is of Hong Kong.
  • Daniel Kaye is listed as the registrant for Cathyjewels[dot]com, and the email address used for that domain was danielkaye02@gmail.com.
  • On Gravatar, the account Spdr01 uses the email address danielkaye02@gmail.com.

Following Krebs’ story, he was proven right: Bestbuy said in court that he was responsible for attacking Deutsche Telekom using Mirai. Then, on July 28, Krebs wrote, “Today, a German court issued a suspended sentence for Kaye, who now faces cybercrime charges in the United Kingdom.” Notably (given the slap on the wrist from Germany), Kaye is expected to be extradited to the UK to face criminal charges there.


How to protect yourself from DDoS


The Mirai botnet is fascinating from the perspective of a mystery or web of information. However, it is not exactly fun to be hit with a massive barrage of bogus requests from an army of zombie routers. Is your company safe from DDoS? At Total Server Solutions, our DDoS mitigation service isolates attack traffic and allows only clean, inbound traffic to pass through to your server. Safeguard your site.

Why You Should and Shouldn't Use Colo


What if you could somehow pass on your server room responsibilities to someone else? How would it feel to get access to the network power, performance, and staff of a huge enterprise? If you replied, “That would be awesome,” to either of the above questions, colocation may be the right choice for you. Let’s first explore what this IT approach is and an overview of the current market before looking at key elements within the industry (to better understand what is impacting providers), and a list of reasons companies take this route.


  • Understanding Colocation
  • Changing Elements of the Colocation Industry
  • Top Reasons for Choosing Colocation
  • How to Approach Colocation Smartly
  • Moving Forward


Understanding Colocation


Colocation is leasing space in an outside data center for your servers and storage – with the owner of the facility meeting your needs for a secure physical location and internet connection. Unlike with cloud hosting, all of the hardware in a colo relationship is owned by you. This arrangement is attractive to many companies because of basic economies of scale: you can access a highly skilled staff, improve your bandwidth, bolster your data safety, and access more sophisticated infrastructure. Your bill is basically a combination of rack space and some degree of maintenance (often minimal).


Changing Elements of the Colocation Industry


Colocation providers are entering a trickier landscape as the market gets hotter. Buyer personas are proliferating; sustainability is becoming a greater concern; cloud hosting is on the rise; and computing strategies are becoming increasingly diversified and complex. Just how hot is colocation getting? With a 14.4% CAGR between 2011 and 2016, the industry is a bit steamy. (But don’t worry: enterprise-grade, multiple redundant cooling systems ensure that your hardware will never sizzle.)


Put another way, colocation may not be as trendy a concept as cloud, but the former is more widely used by enterprises than the latter. According to Uptime Institute’s 2017 Data Center Industry Survey, 22% of enterprise IT systems are housed in colocation centers, while 13% are cloud-based. Plus, colocation is growing alongside cloud: according to figures highlighted in Virtualization Review, that combined 35% of workloads in external data centers is expected to rise to 50% by 2020.


Related: “How to Use Colocation to Your Advantage”


As the market continues to develop, colocation vendors must have the agility to reshape themselves in response while also looking for ways to build their own business by incorporating breakthrough equipment and strategies, and by continuing to focus on operations, affordability, and performance.


Here is a look at some of the key aspects of the market that are evolving, keeping life interesting for those who work at colocation providers:


Who buys colocation? In the past, people in facilities or procurement roles would typically be the ones engaging with colocation vendors. Now, though, choices on infrastructure are being handled by a broader group that includes line-of-business and C-level management. Since colocation firms are now interacting with more COOs, CFOs, and heads of business units, it is increasingly important that they are prepared, both from sales and business perspectives, to “talk shop” meaningfully with individuals from a multifarious array of backgrounds.


How is DCIM used? Both internally and as a value-added service, data center infrastructure management (DCIM) software is becoming a more central function in colocation facilities. DCIM bolsters service assurance while leading to better consistency across analytics. It allows companies to convert their data into actionable metrics and gives infrastructure executives insight into speed and reliability throughout the scope of systems, for more accurate, knowledge-driven decisions. These gains lead to a less expensive, more highly available, and more efficient ecosystem.
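As a rough illustration of turning raw facility data into an actionable metric, consider Power Usage Effectiveness (PUE), a standard data center efficiency measure equal to total facility power divided by IT equipment power; a PUE near 1.0 means nearly all power reaches the IT gear. This is not any specific DCIM product, and the readings below are hypothetical:

```python
# Derive PUE (total facility power / IT equipment power) from raw readings.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hourly samples: (total facility kW, IT load kW) - hypothetical numbers.
readings = [(950.0, 600.0), (980.0, 610.0), (1010.0, 640.0)]
hourly = [pue(total, it) for total, it in readings]
avg_pue = sum(hourly) / len(hourly)
print(f"average PUE: {avg_pue:.2f}")
```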


How is the data center designed? The way that a data center is laid out must make way for cloud hosting, edge computing, and other growing methods. Because methods are in rapid flux, adaptability must be built into architecture. Flexibility makes it possible to pivot to meet different applications and needs. On the flip-side, what colocation centers do not want is minimal service options or stranded capacity. Addressing these issues requires a sustained focus on density and the support of mixed-density rows. Right-sizing can be achieved through modular design so that colocation firms do not overprovision from the outset. These vendors must think about the extent of resiliency that they want to implement and how far to go in that direction – keeping in mind that high resiliency, like high density, is expensive. Additionally, safety must be considered as an element of design, especially since higher density, in and of itself, poses a greater risk to staff.


Top Reasons for Choosing Colocation


The primary reason companies feel hesitant to choose colocation is a sense that they will lose control. Anyone who chooses this route knows they are handing their servers over to someone else.


Well, so then why do people do it? For one thing, yes, you lose day-to-day control of your servers in a physical sense, but you do retain much more control over them than in many hosting scenarios (most notably cloud, since that option is often juxtaposed with colocation). It is still your equipment and your software.


Beyond that, reasons vary. Small and midsize businesses can use it to affordably access a more sophisticated computing environment than they have onsite.


Another key, organization-nonspecific reason that colocation is used comes from Michael Kassner of TechRepublic: “[M]ost managers said their colocated equipment was mission critical, and the colocation providers were able to meet their requirements at a lower cost than if the service was kept in-house.” Sounds simple enough.


Here are a few additional ideas from Susan Adams of Spiceworks on the advantages of entrusting your servers to a colocation facility:


  • Improved physical security (think access logs, cage locks, and cameras)
  • Helpful support (well, if you’ve chosen the right provider)
  • Better uptime, since you’re getting access to cutting-edge uninterruptible power supply (UPS)
  • Better cooling so that your hardware gets better care
  • Scalability, since all you have to do is send the data center more machines
  • Connections with various major ISPs through dedicated fiber


Colocation is often more cost-effective than using your own datacenter since the amount you get billed is inclusive of HVAC costs and power. “Even without those cost savings, though,” says Adams, “you’re paying for the life-improving peace of mind of an enterprise-quality, stable, and fast data center.”


How to Approach Colocation Smartly


How can you succeed with this infrastructural method? First, be prepared: understand that you will need to supply your own software and servers.


Once you have all your machines ready, Adams advises monitoring resource consumption so that you stay within any limits tied to your plan.
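A minimal sketch of that kind of usage check might look like the following. The plan limit, warning threshold, and linear projection are all illustrative assumptions; real monitoring would pull counters from the provider’s portal or via SNMP:

```python
PLAN_BANDWIDTH_GB = 10_000  # hypothetical monthly transfer included in the plan
WARN_THRESHOLD = 0.8        # warn once 80% of the cap is consumed

def check_usage(used_gb: float, day_of_month: int, days_in_month: int = 30) -> str:
    # Naive linear projection of month-end usage from the pace so far.
    projected = used_gb / day_of_month * days_in_month
    if projected > PLAN_BANDWIDTH_GB:
        return f"over: projected {projected:.0f} GB exceeds the {PLAN_BANDWIDTH_GB} GB plan"
    if used_gb > PLAN_BANDWIDTH_GB * WARN_THRESHOLD:
        return "warn: past 80% of the monthly cap"
    return "ok"
```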


Also, switching from your own datacenter to colo involves many moving parts. Build in extra time, and be prepared for potential snags so that everything proceeds smoothly.


Finally, in order to facilitate usability, you want to have a strong connection to the colocation facility – as can be achieved with a Border Gateway Protocol (BGP) circuit and BGP tail.


Moving Forward


Are you considering colocation for your infrastructure? At Total Server Solutions, all of our datacenters are robust, reliable, and ready to meet your challenges. Discover our reach.

How to Use Colocation to Your Advantage

Posted by & filed under List Posts.


Let’s look at the colocation market and a few statistics; talk about why businesses are choosing colocation (i.e., the problems it addresses); and finally, review 10 strategies to select the best colocation provider.


What does the move off-premises look like?


The share of computing workloads handled onsite has hovered at approximately 70% for the last few years, but research suggests cloud and colocation will claim a greater share in the years ahead.


According to the Uptime Institute’s 2016 Data Center Industry Survey, fully half of IT decision-makers predict that most of their computing will eventually occur in a third-party facility. Among those, more than two-thirds (70%) expect off-premises computing to overtake on-premises by 2020.


A substantial portion of the transition to external providers is headed for public cloud. However, many businesses will also be switching over to colocation, or colo – the rental of space within an external data center for a business’s own servers and hardware. This practice is called “colocation” (co-) because it collaboratively meets the business needs of the client: you provide the servers and storage, while the vendor provides the facility, physical security, climate control, bandwidth, and power.


Colocation vendors have been expanding. That’s evident from statistics from business intelligence firm IBISWorld, which reveal a compound annual growth rate of 14.4% from 2011 to 2016 (with a total market size of $14 billion).


Why do businesses choose colocation?


Here are some of the most common reasons businesses use colocation, according to senior IT executives:


  • Worldwide growth
  • Challenges related to mergers or acquisitions
  • Migration of systems that are not core business
  • Leadership instructions to move off internal hardware
  • Avoiding the cost of building a new data center
  • Limiting churn from noncritical computing into critical systems
  • Use of a different power grid for disaster recovery
  • Uncertainty about in-house resources or staff


Related: “Why Colocation?”


Michael Kassner of TechRepublic lists several other reasons for this practice that get a little more granular:


  • Cost-effectiveness – Because data centers can get volume deals on internet access and bandwidth, you can save on those costs.
  • Security – If an organization’s IT staff lacks security expertise, a colocation facility can offer better data safety.
  • Redundancy – The amount of backup is expanded in terms of both power and the network. A business might have its own generators for uninterrupted power during outages, but it often will not have diversified its internet connections across multiple vendors.
  • Simplicity – You own the software and hardware, so you can update these components as needed without having to renegotiate with the vendor.
  • No more “noisy neighbors” – If you don’t have guaranteed resources in a VPS or cloud hosting plan, you can end up with other tenants hogging the resources (CPU, disk I/O, bandwidth, etc.), hurting your performance.


10 tips to select a strong colocation vendor


Any company that is using colocation is using some of its budget for data center capacity from an external party. Since that’s the case, they are entitled to expect that their vendors operate with standards at least as high as those they apply in-house. The brokering of services generally has become a more important skill for CIOs; as for colocation, the assessment, contract structuring, and management of these partnerships will become increasingly critical to the success of an IT department.


Here are tactics to make sure colocation works right for you (you’ll notice that many of these questions cover similar ground to the general reasons listed above):


#1. Prioritize physical location. Yes, you want to be able to get to the facility easily for physical access; plus, be aware that relative proximity reduces network latency and simplifies data replication.


#2. Confirm third-party verification. You need to know that availability is fundamental to the infrastructure that you’re using. Make sure there is documentation to back up any claims made by the vendor about their ability to meet Statement on Standards for Attestation Engagements No. 16 (SSAE 16) or other key industry standards. If your systems are mission-critical, get evidence from the provider.


#3. Check for redundant connectivity. Redundancy is a key reason colocation is a strong option, so confirm that backup connections exist. The reliability of these internet connections is also crucial.


#4. Look for commitments to security & compliance. Security should be a major concern of any data center, but verifying that commitment is a major concern for you. You also have to check that the vendor meets your regulatory requirements so you are protected and aren’t blindsided by violations.


#5. Review how the vendor will provide support. You need to make sure your needs are met both in terms of the hardware and support, as should be spelled out in the service-level agreement (SLA).


#6. Get a sense of business stability. Matt Stansberry of the Uptime Institute advises looking for a colocation facility that has been running for a number of years, by the same organization, with a consistent group of providers and clients. In other words, you do not want moving pieces but stability. Problems are likelier to arise when the vendor you choose gets acquired by another organization, reinstalls hardware, adjusts its operations, or is consolidating equipment. To gauge this aspect of the business, ask about the data center’s hardware lifespan, occupancy rate, and even employee turnover. Does the average staff member have a long tenure? If not, why? And if the hardware is aging, do not be surprised if the firm is gearing up for potentially problematic upgrades.


#7. Assess the scope of services offered. Ideally, the vendor will provide a range of services. That may sound irrelevant to your immediate concern of getting your equipment colocated. However, a diversity of offerings means you can adjust as your organization’s needs change without having to vet a new provider all over again.


#8. Make sure that cooling and power are guaranteed. The SLA should ensure that power and backup power will be in place without exception.


#9. Confirm that operations are aligned with your expectations. You are likeliest to experience downtime when errors or oversights are made in operations. You will not always be able to get full paperwork (maintenance records, incident reports, commissioning reports, etc.), but getting what you can will give you a more transparent window into how things run at the vendor.


#10. Generally improve your RFPs and SLAs. Make sure terms are established well within an RFP or SLA. Specific ideas from the Uptime Institute Network to enhance your effectiveness with these documents include: 1.) staying brief (2-3 pages) so that potential vendors don’t feel overwhelmed by a massive document; 2.) remembering that due diligence must occur regardless of what brands are currently using the vendor; and, 3.) keeping overprovisioning at bay by questioning hardware faceplate data and assumptions of excessive impact from an equipment refresh.




Are you looking to make the most of colocation as a strategy for IT at your business? The above considerations can guide you in the right direction. At Total Server Solutions, we meet the parameters of an SSAE-16 Type II audit; but our service is what sets us apart, and it’s our people that make our service great. Download Our Corporate Overview.

Get Started with the Internet of Things

Posted by & filed under List Posts.

Strategizing a conscientious plan will help you launch into the internet of things without any hitches along the way. Here, we look at three methods or best practices that seem to be held in common by the most successful IoT adopters, as indicated by an MIT overview. First, though, we assess statistics on the scope of the IoT and its general business adoption rate.


Is the internet of things on the rise? Well, considering recent IoT market statistics, the answer is a confident “yes”:


  • The total market size of the IoT will increase from $900 million to $3.7 billion between 2015 and 2020 (McKinsey).
  • The number of devices that make up the IoT will expand from an installed base of 15.4 billion to 30.7 billion by 2020, and on to 75.4 billion by 2025 (IHS).
  • IoT hardware, software, and business service providers will have annual earnings greater than $470 billion by 2020 (Bain).
  • Over the next 15 years, the total money that will be injected into the industrial IoT will be more than $60 trillion (General Electric).


Despite these numbers, and even though the internet of things is generally a subject of widespread attention, many companies have still not launched an IoT project. A report published in the MIT Sloan Management Review just nine months ago revealed that the majority of companies responding to its international survey (3 in 5) did not currently have an IoT project in place.


However, as Stephanie Jernigan and Sam Ransbotham note in the journal, the flipside is that 2 out of every 5 organizations are moving forward with IoT. The important thing, then, is to figure out what can be learned from the early adopters.


How do you move forward with successful IoT?


Here are the three best practices that seem to differentiate the most strongly successful adopters of the internet of things from the ones who didn’t fare as well, according to the researchers:


#1 best practice – Think big, but act small.


When businesses succeed with their first attempts at the IoT, they don’t get too grandiose with its scale. They select a direction that does not stretch the budget and does not employ excessive devices. A key project mentioned by the researchers is the Array of Things (AoT), a network of sensor-containing boxes currently being installed throughout Chicago to gather and analyze real-time data from the infrastructure, environment, and movement for public and research applications. AoT “will essentially serve as a ‘fitness tracker’ for the city,” notes the project’s FAQ page, “measuring factors that impact livability in Chicago such as climate, air quality and noise.”


Reliability is essential because maintenance is a particular challenge of IoT projects such as this. The MIT research team notes that the AoT has been moving slowly with the launch specifically because they need to make sure they know exactly what the reliability of nodes is. According to the University of Chicago, the first 50 of a total 500 nodes were installed in August and September 2016. The project continues to work in stages through its completion, with all nodes set to be in place by December 2018.
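That reliability-first, staged rollout can be sketched as a simple gate on per-node heartbeat data: expand only once every node meets a reliability bar. The node names, heartbeat counts, and threshold here are invented for illustration:

```python
def reliability(heartbeats_received: int, heartbeats_expected: int) -> float:
    """Fraction of expected heartbeats actually received."""
    return heartbeats_received / heartbeats_expected

# Heartbeats received vs. expected per node over the last day (hypothetical).
nodes = {
    "node-001": (1430, 1440),
    "node-002": (1440, 1440),
    "node-003": (1180, 1440),
}

REQUIRED = 0.95  # expand the rollout only once every node clears this bar
flaky = [name for name, (got, want) in nodes.items()
         if reliability(got, want) < REQUIRED]
print("hold rollout; investigate:", flaky)
```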


There is another side to size with IoT, too: you have to take care not only of the devices but also of the relationships affected by the project. Companies studied by the MIT researchers typically focused on a single group or a small set of groups (rather than all of the company’s points of connection), making the project easier to control from a relationship perspective.


A benefit of starting small and more niche is that you are less likely to create a headache for yourself in terms of integration moving forward.


#2 best practice – Embrace both short-term and long-term vision.


Jernigan and Ransbotham advise first coming up with use cases that might be worthwhile for your firm and then calculating the ROI for each of them. To a great extent, you should be able to attach concrete numbers to the project. Executives who replied to the MIT poll said that they had been able to come up with specific numbers showing the advantage of IoT via:


  • Rise in earnings (23%)
  • Rise in supply chain delivery or accuracy (20%)
  • Drop in fraud or other crime (16%)
  • Rise in harvest or manufacturing yields (15%)


The respondents said that these were each reliable ways to gauge effectiveness.
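The per-use-case ROI calculation can be sketched in a few lines. The use cases and dollar figures below are hypothetical placeholders, purely to show the comparison:

```python
def roi(annual_gain: float, annual_cost: float) -> float:
    """ROI as a fraction: (gain - cost) / cost."""
    return (annual_gain - annual_cost) / annual_cost

# Candidate projects with estimated annual gain and cost (hypothetical).
use_cases = {
    "fleet tracking":     {"gain": 180_000, "cost": 120_000},
    "smart metering":     {"gain": 90_000,  "cost": 100_000},
    "predictive repairs": {"gain": 250_000, "cost": 140_000},
}

# Rank the candidates from best to worst return.
ranked = sorted(use_cases.items(),
                key=lambda kv: roi(kv[1]["gain"], kv[1]["cost"]),
                reverse=True)
for name, f in ranked:
    print(f"{name}: ROI {roi(f['gain'], f['cost']):+.0%}")
```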


However, it is not enough to simply think in terms of what’s happening right now. When you move forward with the internet of things, it’s important to think about how the insight from the current project can be reintroduced to something more expansive. The MIT scholars note that some enterprises have started out collaborating on the Array of Things before jumping into other ventures.


Once you have your own internal project going, you will quickly think of other applications, says Silicon Labs IoT products senior VP Daniel Cooley – how you can put the data from the devices to the best possible use. “[S]omeone puts this wireless technology in place for a reason[,] and then they find different things to do with that data,” he says. “They very quickly become data stars.”


#3 best practice – Keep looking at different options.


It is key that you are able to see an obvious ROI from your internet of things project, that the data is needed, and that you are gathering it by the best possible means.


Nearly two-thirds of those surveyed by MIT (64%) said that they could not get the results that they have achieved with the IoT in any other way. The reason that the Array of Things took form is that the Urban Center for Computation and Data wanted to be able to answer questions about city concerns through data. Realizing that they did not have all the information they needed, they had to think about their options.


For instance, the UrbanCCD wanted to analyze asthma rates to see how they related to traffic and congestion levels in certain neighborhoods. Leadership at the organization started to think that sensors, connected to the web and distributed throughout the streets of Chicago, would be the ideal way to get reliable information directly from the source. Jernigan and Ransbotham noted that the scientists at the center did not immediately gravitate toward the IoT. Instead, they had a problem, and setting up IoT sensors was the most reasonable fix.


The MIT team highlights a number of other key findings about the internet of things:


  • Companies with advanced analytics skills are more than three times as likely to derive value from the internet of things as firms with less developed skills in that arena.
  • The IoT ties together not just devices but companies as well. This fact “necessitat[es] managerial attention to the resulting relationships,” say Jernigan and Ransbotham, “not just technical attention to the devices themselves.”
  • The IoT ties firms to government agencies and other industry players in addition to their customers and vendors.
  • Generally, a large economy of scale is a good thing. That’s not always the case with the internet of things, though: expenses can grow faster than the network of devices.
  • The internet of things rests on sophisticated foundations, including its technical infrastructure and analytics, and it amplifies these complexities.
  • The advantage of the complexity is that those who thrive on contemplating different processes and systems are rewarded.




Do you want to experiment with the internet of things? Note the emphasis on technical infrastructure as a foundation for an enterprise-grade internet of things project. At Total Server Solutions, our High Performance Cloud Platform uses the fastest hardware, coupled with a far-reaching network. Build your IoT project.

Major Cloud Developments in 2017

Posted by & filed under List Posts.

Over the past few years, cloud computing has become an increasingly central topic in discussions of top IT trends and concerns. The technology was one of the main points of focus of the Forrester report “2017 Predictions: Dynamics That Will Shape The Future In The Age Of The Customer.”


Forrester’s discussion notes that while cloud computing has been an extraordinarily powerful cross-industrial disruptor, it also is no spring chicken. Cloud, now entering its second decade, is “no longer an adjunct technology bolted onto a traditional infrastructure as a place to build a few customer-facing apps,” notes Forrester. Software, platforms, and infrastructure “as-a-service” have been refined and perfected to meet a complete range of business needs, from mission-critical backend programs to customer mobile apps.


2017 will usher in further expansion of the cloud. Enterprises already had adopted various clouds prior to 2017, but the commitment to a multi-cloud ecosystem will only increase this year, suggests the report. CIOs will be challenged by management and integration of various clouds between customers, employees, affiliates, and vendors.


Since the organization, control, and administration of these technologies will be a growing concern for IT leaders, they will turn to networking, security, and container solutions to better facilitate easy management.


Public Cloud Outpaces Previous Forrester Forecast by 23%


As noted above, a technology earns your attention when it causes disruption. As far as the cloud goes, its disruptiveness will remain unchecked at least through 2020, according to Forrester principal analyst Dave Bartoletti in ZDNet. Bartoletti adds that this will be the first year in which enterprises really start making a seismic shift to the cloud – fueling the market like never before. The result is that the total size of the public cloud market worldwide will hit $146 billion this year, by the industry analyst’s numbers, rising from its $87 billion scope in 2015.


Forrester does not see cloud plateauing this year either, though. Calling the technology “the biggest disruption in the tech market in the past 15 years,” the business intelligence firm forecast last year that the amount spent on SaaS, IaaS, PaaS, and cloud business services would rise at a 22% compound annual growth rate from 2015 through 2020. The $236 billion market size that Forrester now estimates for 2020 is 23% larger than the figure in an earlier report from the company. Plus, the Forrester researchers now predict a more expansive transition to cloud from legacy software, platforms, and services.
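As a sanity check, the quoted figures line up: compounding $87 billion at a 22% annual rate for the five years from 2015 to 2020 lands close to Forrester’s $236 billion estimate.

```python
# Verify the compound growth claim quoted above.
def compound(start: float, rate: float, years: int) -> float:
    return start * (1 + rate) ** years

market_2020 = compound(87e9, 0.22, 5)  # $87B in 2015 at a 22% CAGR
print(f"projected 2020 market: ${market_2020 / 1e9:.0f}B")
```

The result is roughly $235 billion; the small gap from the quoted $236 billion presumably comes from rounding in the published growth rate.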


3 Reasons Business Can’t Ignore Cloud


To demonstrate how critical cloud computing has become, Bartoletti offers three reasons business can’t ignore the technology:


  • Pay-per-use is just one way to buy cloud. 2017 will see a greater focus in the industry on shaping more diverse payment models: pre-paid, on-demand, reserved capacity, and enterprise agreements.
  • More lifting and shifting. Cloud migration will become simpler with better lift and shift functionality. With massive infrastructure-as-a-service providers offering a lift and shift approach to transfer systems into their environment, this method is becoming more prevalent.
  • Challenges remain with hybrid cloud networking. Hybrid cloud is now deployed heavily throughout industry. However, says Bartoletti, “it will take a long time for most organizations’ networks to be able to seamlessly connect into hybrid cloud management and orchestration tools.”


In this climate, it’s necessary to establish clear plans related to software-as-a-service and private cloud. Furthermore (and as indicated above), the management capacities for security, networking, and containers, as well as hyper-converged infrastructure, are becoming important areas of expertise for CIOs in 2017.


7 More Central Cloud Developments


That advice from Bartoletti reflects what he seems to consider the most salient aspects of a broader list of major cloud trends constructed by Forrester, “Predictions 2017: Customer-Obsessed Enterprises Launch Cloud’s Second Decade.” In an overview of the report, Forrester enterprise architecture analyst Charlie Dai hits some of the same points covered elsewhere, offering some to-do items for 2017 before outlining the developments:


  • Solidify your private cloud and software-as-a-service plan for 2017 (and extend it through 2018).
  • Get informed about how your technology options are changing with innovations in containers, networking, security, and hyper-converged architecture.


Here is the remainder of the 10 cloud developments for 2017:


  • Private clouds will become more prevalent as hyper-converged infrastructure is adopted more aggressively.
  • The so-called “megaclouds” (SAP, Salesforce, Google, AWS, Microsoft, IBM, etc.) will become less dominant as growing competition and differentiation based on service and niche opens up the market.
  • Expensive, heavy, and complicated private cloud suites will become less popular.
  • Software-as-a-service will become more customized and distinct in the needs that it meets.
  • Cloud development worldwide will hinge heavily on Chinese companies.
  • Cloud management and platforms will be heavily impacted by greater use of containers.
  • Security will become a more standard tie-in with cloud service offerings.


Additional Forrester 2017 Predictions


Let’s branch out to a little broader context and look at the company in which the cloud finds itself; what other technologies were highlighted by Forrester in its 2017 Predictions? The other areas of tech that Forrester covers in “2017 Predictions: Dynamics That Will Shape The Future In The Age Of The Customer” (with the cloud subsection covered above) are artificial intelligence (AI); the internet of things (IoT, which is heavily dependent on cloud); and virtual and augmented reality:


  • Artificial intelligence: The report points out that users have provided an extraordinary amount of data about themselves to companies. “From a customer’s point of view,” says Forrester, “that is OK (sort of) if the company uses that same data to deliver valuable, personalized experiences.” However, data is locked in numerous environments, and companies are generally not integrating with one another for more complex insights. This year, AI integration will soar, brought about by a stronger demand for knowledge about user behavior via mobile, IoT, and wearables.
  • The Internet of Things: The internet of things is growing astronomically as companies have begun to realize its potential to drive revenue. However, the manner in which IoT is being applied is unstructured – since the industry is still taking shape. Use cases, protocols, standards, programs, and equipment are highly varied. During 2017 and leading into 2018, the IoT is becoming increasingly sophisticated. Modern microservices will underlie internet of things plans, which will extend across cloud servers, gateways, and edge devices. IoT devices are security concerns, though: they can be hacked and turned into DDoS slaves.
  • Virtual and augmented reality: Following the ascent of Pokémon Go, Forrester is doubling down with its predictions for the continuing rise of virtual and augmented reality. The analyst lists three key insights related to VR and AR: 1.) IT costs and power will keep getting more manageable; 2.) Developers will keep working with an array of tools, leading to innovative approaches; and, 3.) Since there aren’t yet “best” applications or use cases in this field, the market will experience a gradual, carefully strategized evolution.




Do you want to be better prepared for the Age of the Customer? As with the other technologies highlighted by Forrester, you don’t just need a cloud platform. You need the fastest, most robust cloud platform in the industry. Your cloud starts here.

Jazzercise Rebrand with Magento

Posted by & filed under List Posts.

Jazzercise needed to change its image and upgrade its online presence. The company rebranded in 2015 and launched a new Magento site in 2016 – resulting in a 20% reduction in fixed operating costs and 14% higher revenue.


Jazzercise. Yes, it’s a household-name exercise program, but it certainly has not been the workout of choice for millennials. That’s in part because the brand has struggled to recover from an 80s image – when the fitness world saw a heyday with the rise of stars such as Jane Fonda and Richard Simmons. Probably no one is more emblematic of that erstwhile “get in shape” craze than Simmons – whose 1988 sensation Sweatin’ to the Oldies made him virtually synonymous with slimming down in a positive, self-affirming, and entertaining way.


This association with Simmons is problematic because he has been treated rather mercilessly in the media. The jokes of talk show hosts such as Howard Stern and David Letterman were sometimes harmless and other times cruel, as was also true of random amateurs on YouTube. The manner in which Simmons was framed as a laughingstock is troubling, given how emotionally fragile he seems to be – and the fact that he has receded from TV since 2014. What does all this mean from a branding perspective? Jazzercise identified that it was stuck in the past, as part of that same corny fitness trend that made it so easy for pop cultural figures to ridicule Simmons.


Related: “The Right Ecommerce / Brick & Mortar Balance”


Jazzercise wanted to be taken seriously, and in order to do so, the brand had to come of age. Let’s look at how Jazzercise updated its brand image and how the company’s adoption of Magento helped it to recover momentum.


Jazzercise Rebrands


Jazzercise has actually been around for 48 years; now headquartered in Carlsbad, California, the company was originally launched by Judi Sheppard Missett in Evanston, Illinois. In 2015, Jazzercise began its reformulation, rebooting its logo and color palette while introducing a new ad campaign; in 2016, the brand switched to Magento for better e-commerce presentation. Formerly considered a softer, subtler exercise program, the company’s new approach incorporates movements inspired by hip-hop dance, Pilates, and even kickboxing. If this sounds like an overhaul, it is; the slogan of the campaign was actually, “You Think You Know Us But You Don’t.”


Group fitness classes were already a part of American culture before the popular tidal wave of exuberant hip-swiveling that ushered in the 90s. “In the ’80s is when we saw [fitness instruction] really take off, and Jazzercise was a very big part of that,” explains American Council on Exercise (ACE) senior advisor Jessica Matthews. “That’s when you started to have this identified profession.”


The Big Business of Dance-Inspired Workouts


Let’s get something straight so that it’s clear Jazzercise is not a sinking has-been: the company is valued at $100 million, and it’s currently #81 on Entrepreneur’s list of the 500 fastest-growing franchises. True, 2014 saw a decline in the number of Jazzercise locations; but the dip was effectively corrected with the 2015 rebrand and 2016 move to Magento.


Jazzercise franchise units (https://www.entrepreneur.com/franchises/jazzerciseinc/282474).


The brand credits much of its success to something that any marketer or salesperson can appreciate: framing. Its particular take on fitness lets people see exercise from a perspective unlike anything previously on the market: Jazzercise positions exercise as dance, and people don’t think of dance as exercise. It lets people do something they enjoy, rather than pushing themselves through something miserable. That may not convince you to sign up for a membership, but it does help explain the popularity of the model and the essence of the company’s differentiation.


While dance is fun, the fitness firm has increasingly recognized the need to update both the dance moves and the songs in order to keep customers engaged. Jazzercise credits its remarkable retention (the average customer stays for seven years) to that flexibility – the updating and continual reformulation of what people experience in its classes. However, for initial attraction, Jazzercise now also focuses centrally on effectiveness. According to the brand, you can burn 500-600 calories in a one-hour session; and independent assessments of dancing’s impact on calories suggest that’s possible (although it might be closer to 400 calories for the average person).


A New E-Commerce Platform as a Springboard for Further Growth


Jazzercise isn’t just an in-person entity, of course. Yes, the physical franchise model is at its core; but today, Missett (still the firm’s CEO) and her team release new branded exercise clothing and accessories via the company’s e-commerce platform each month.


Until 2015, Jazzercise ran its store on Amazon Webstore, and when that segment of the tech giant was shut down, Jazzercise had to rethink its approach. One of the things that had frustrated the company about the Amazon system was that it couldn’t integrate its enterprise resource planning (ERP) system with the platform’s API. That limitation caused “syncing delays that dominoed into inventory discrepancies, and fulfillment and accounting nightmares,” according to a Magento case study on Jazzercise.


Mobile use overtakes desktop (http://bgr.com/2016/11/02/internet-usage-desktop-vs-mobile/).


The company was also concerned about support for mobile devices – mobile use now exceeds desktop use globally, and smartphones and tablets are increasingly used by e-commerce shoppers. Because the Amazon platform did not support responsive templates, the company had to maintain separate desktop and mobile sites.


Jazzercise needed more thorough and agile merchandising options. Webstore’s capacity in this category fell short of the fitness brand’s expectations, hurting revenue.


Jeff Uyemura, the Jazzercise digital manager, specifically points to the issue of personalization and how Magento has allowed the company to customize its approach for each user. He said the decision was made to switch because the technology “allowed us to target key customer segments more effectively and offer unique content and price points.”


Impact of Magento on Flexibility, Costs & Revenue


As indicated above, when Jazzercise switched over to Magento, it was able to integrate the platform with its ERP system, eliminating the delay in syncing data. That near-real-time processing keeps inventory consistent throughout the ecosystem and prevents overselling.
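The mechanics behind that benefit can be illustrated with a small sketch. This is not Jazzercise’s or Magento’s actual integration code; it is a hypothetical reconciliation step (all SKU names and function names are made up) showing how pushing ERP stock counts into the storefront catalog on every sync surfaces discrepancies and blocks overselling:

```python
# Hypothetical near-real-time inventory sync between an ERP system and a
# storefront catalog. SKUs and quantities are illustrative only.

def sync_inventory(erp_stock: dict, storefront_stock: dict) -> dict:
    """Push ERP stock counts into the storefront catalog.

    Returns the SKUs whose storefront quantity was stale - exactly the
    kind of discrepancy a delayed batch sync lets accumulate.
    """
    discrepancies = {}
    for sku, qty in erp_stock.items():
        if storefront_stock.get(sku) != qty:
            discrepancies[sku] = (storefront_stock.get(sku), qty)
            storefront_stock[sku] = qty  # storefront now mirrors the ERP
    return discrepancies

def can_fulfill(storefront_stock: dict, sku: str, requested: int) -> bool:
    """Refuse orders the synced inventory cannot cover (no overselling)."""
    return storefront_stock.get(sku, 0) >= requested

# Example: a leotard sold out in a studio; the ERP knows, the site doesn't yet.
erp = {"LEOTARD-M": 0, "TIGHTS-S": 12}
site = {"LEOTARD-M": 3, "TIGHTS-S": 12}

changed = sync_inventory(erp, site)
print(changed)                            # {'LEOTARD-M': (3, 0)}
print(can_fulfill(site, "LEOTARD-M", 1))  # False: oversell prevented
```

The point of the sketch is the ordering: because the sync runs before fulfillment is promised, the storefront never advertises stock the ERP has already given away.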


It was also possible within Magento for Jazzercise to upload a custom mobile theme and run a single site that renders correctly on any type of device (rather than the four separate sites – B2B and B2C versions for both desktop and mobile – it had maintained when the transfer was made). With this simplicity, Magento has allowed the brand to lower its maintenance costs and create more seamless digital brand consistency.


In terms of merchandising, the e-commerce platform has allowed Jazzercise to highlight certain items in each category and showcase them when a catalog launch occurs. Plus, creating a design that is personalized to the customer provides better targeting.


The brand’s Magento site went live in June 2016. Uyemura credits it with bringing e-commerce fixed operating costs down 20% and boosting online revenue 14%.




Would you like to see a similar reduction in e-commerce costs and improvement in revenue? The Magento platform can only deliver the speed and reliability you need to impress prospects and customers if it’s backed by the right infrastructure. At Total Server Solutions, we offer high-performance Magento hosting, along with optional merchant accounts so you can sell and accept payments quickly and easily.

Ecommerce and Brick-and-Mortar

Posted by & filed under List Posts.

We know that the average shopper has needs that are met in-person, as well as ones that are met through digital channels. How can companies balance their efforts between online and offline for the best possible results?


Stats that show the shift to online shopping


When we think of a store, the first thing that might come to mind is a physical one. We walk through the door and can browse through the aisles, picking products up and trying things on before making our decisions. Virtual reality may offer “full immersion,” but the real “full immersion” is reality itself: the product in your hand.


However, that value of in-person inspection comes at the cost of relative convenience, as more options have emerged online and people have grown increasingly comfortable shopping for and paying for items through their computers and mobile devices. As the landscape changes, so does the position of storefront retail – and yes, it is clearly on the decline. Fourth-quarter industry statistics from Investor’s Business Daily show poor numbers for all the retail groups the publication monitors; in fact, the Department Stores category ranks last out of all industries – 197 out of 197. The good news is that this devastation in the world of B&M coincides with an expansion of e-commerce sales – a 29% overall rise during the 2016 holidays.


Another way to see this trend is in comparing the Q4 2016 results to determine the “online growth edge” for a couple of major box stores (IBD):


Brand   | E-commerce | Storefront | Online growth edge
Target  | +34%       | -1.5%      | +33%
Walmart | +29%       | +1.8%      | +27%


As e-commerce continues to become more sophisticated and better able to address consumer expectations, what is the value of a physical storefront? We know it’s not the heavy-hitter it once was. That is clear not just from the megabrands above that are straddling the fence, but from those that have gone bankrupt or are closing stores nationwide, such as American Apparel, The Limited, Wet Seal, Aeropostale and Pacific Sunwear.


Is B&M sinking into oblivion?


One perspective holds that the physical store is akin to snail mail: useful to many now, but destined to become increasingly irrelevant. That’s not quite right, though. In essence, the rise of digital does not signal the demise of brick-and-mortar so much as an evolution of the way people shop – and a shift in the role of stores toward a more functional, mundane purpose as distribution points.


Boston Retail Partners principal Ken Morris uses the example of Restoration Hardware to make this case. The showrooms of the brand are settings for inspiration, notes Morris. “[T]hey’re not really selling anything there,” he says. “It’s like a giant 3D real-time catalog.”


BRP’s vice president and practice lead, Perry Kramer, adds that the service experience needs to be treated as paramount in order to win at storefront retail in the new age. The example he gives is the Apple Store, where you can try products and get advice from salespeople who are generally considered well-trained and helpful.


How omnichannel goes beyond multichannel as an integrator


You may have heard the word omnichannel countless times and dismissed it as one of those annoying marketing buzzwords; but actually, omnichannel is an important business concept.


You can think of omnichannel as a type of multichannel – or even the newer, savvier evolution of multichannel. TechTarget defines omnichannel as “[a] multichannel approach to sales that seeks to provide the customer with a seamless shopping experience, whether the customer is shopping online from a desktop or mobile device, by telephone or in a bricks and mortar store.”


What differentiates omnichannel from multichannel? In a nutshell, it’s integration. Omnichannel involves backend integration rather than just diversification of channels. Compare the above description of omnichannel to a definition of multichannel provided by Jay Acunzo in the HubSpot Blog. Acunzo defines the simpler multichannel concept as communication across various channels, both digital and otherwise. Multichannel is about marketing in many different places at the same time; omnichannel is about bringing together the insight from each approach.


There is another aspect of omnichannel that is evident in its name. While multichannel is about the many avenues you can take (see its prefix, multi-), omnichannel is about addressing every possible channel (see omni-). Omnichannel is a more thorough approach based on the idea that people now expect to be able to shop, experience your brand, and engage with you as a customer through the full range of possible means (for example, within all the various social media sites, brick-and-mortar stores, your websites, and your mobile apps).


To better understand how a brand can leverage an omnichannel strategy, just look at what customers think should be available to them. Nearly three-quarters of shoppers (71%) expect brands to make in-store inventory data available online. Similarly, half of customers (50%) expect to be able to buy on the Internet and pick up items in person (“Customer Desires vs. Retailer Capabilities: Minding the Omni-Channel Commerce Gap,” Aberdeen).


The final, fundamental reason why omnichannel is such a key concept for your company’s growth is that these consumers are big spenders. “Omnichannel shoppers are typically a retailer’s most valuable customers—spending over five times as much as those who only shop online,” notes a Bain & Company report. “Creating a great experience for those customers is critical, and not doing so is very risky.”


Setting the money aside and looking at omnichannel simply in terms of user experience: your users should be able to shop more efficiently, without having to stop and start along the way. Customer service should be as sophisticated as possible; brands often neglect that concern, so integrating different touchpoints via omnichannel is a powerful differentiator.


3 brands with omnichannel to emulate


Here are three household-name brands whose omnichannel user experiences are worth emulating:


  1. Disney – This incredibly popular family brand has embraced omnichannel with its My Disney Experience tool, which allows consumers to comprehensively plan their trips, from getting Fast Passes to the park to pre-determining dining locations. Within any Disney park, you can find attractions and wait times via the mobile app. The Magic Band program, which offers Fast Pass integration, adds further capabilities and complexities: hotel room key functionality, storage of photos with Disney characters, and food ordering.
  2. Bank of America – This brand is considered a bellwether in finance related to omnichannel. The company’s tools include the ability to deposit checks and schedule appointments both via mobile and on desktop. Additionally, customers are able to pay their monthly bills seamlessly through any device.
  3. REI – This company provides clear product data throughout its customer ecosystem, says Aaron Agius in the HubSpot Blog. “[T]hat kind of internal communication will keep customers happy, satisfied and returning back to their store again and again,” he adds.




Hopefully, the above advice can help you address the need for balance between online and offline shopping at your company. Do you need help getting your e-commerce site up and running, or improving the performance of your current site? At Total Server Solutions, we support all of the top shopping cart applications and also offer merchant accounts so you can sell and accept payments quickly and easily. See our secure e-commerce solutions.