Cloud integration


<<< Go to Part 1

6.) Consistency With New Releases – Martin Welker at Zenkit

Zenkit is a project management app. It lets you perform tasks such as studying your data analytics, spotting connections in your information that you may not have previously noticed, creating filters and aggregations, and designing formulas. Martin Welker, the CEO and founder of the company, sold his first application when he was only 15 and has been developing business productivity software for more than 20 years. His customer base, at 5 million, is just slightly larger than the entire population of South Carolina. The fact that Welker is paying so much attention to emergent data strategies should not be surprising, since it is aligned with his appreciation for the cloud.

Welker explains that there are many reasons it makes sense to use cloud when you need app hosting. These are, in his opinion, the most compelling things that the technology has to offer:

  • It is reliable. You will often get much better reliability with a cloud backend than with one you architect and build in an on-premise data center. Your contract can include a Service Level Agreement that passes along, by extension, the service levels guaranteed by a credible infrastructure provider. Your systems will be live 24/7, Welker says, and your uptime will stay high throughout (assuming you have a solid SLA and are working with a host that earns praise in third-party reviews). You will also get high reliability from the IT staff running your servers, because they monitor around the clock and are there to answer and resolve any late-night support questions or issues that arise.
  • It boosts your adoption. Cloud gives you a really low barrier to entry. People are used to web-based software. It’s possible to get people to register with a single click. Since everything is Internet-based, anyone who is online is a potential user or customer.
  • It is built for the peaks and valleys. You can scale – not just grow, but fluctuate in response to demand without having to change your servers. That means you don’t need hardware padding, so you can operate on a leaner model.
  • It is becoming the new normal. Cloud is becoming ubiquitous, which also means that it is the only place you will be able to find certain cutting-edge or sophisticated capabilities.
  • It is highly flexible. You can introduce and release new patches and updates to your software that are applied across all users, because no one has to download anything (whether an end-user or a system administrator). If you break something in the code, it is fast to roll back to the most recent working version.

7.) Allows for Size-Ambiguous IT – André Gauci at Fusioo

Fusioo is an online database app, and André Gauci is its CEO. Gauci notes that the chief element cloud has to offer that is often cost-prohibitive in a traditional setting is scalability. This strength, also mentioned by Welker above, means that, for example, you are ready for Black Friday on-demand in November but don’t have to be ready for it year-round.

8.) Ease of Access – Tieece Gordon at A1 Comms

Tieece Gordon of UK telecommunications provider A1 Comms notes that cloud is the best setting for working and storing information, because it allows you to embrace a characteristic that is fundamental for business success: versatility.

You become more versatile because you can better connect with people from all areas. Your data environments are seamless and ready right now, so anyone can access data irrespective of where they are.

In this setting, Gordon points out, you get better productivity and better efficiency. “Instant and simple access means there’s more time to put towards more pressing operations than trying to find something lost within a pile of junk,” he says.

9.) Recurring Revenue – Reuben Yonatan at GetVoIP

GetVoIP is a voice-over-IP review site built on a cloud infrastructure. Reuben Yonatan is the company’s founder and CEO. Based on a decade of experience with enterprise infrastructure, he notes that one of the best parts of a cloud-hosted app is that it gives you recurring revenue. How? If you create a subscription model for your app, the cost of continuing to develop it is covered by customers automatically (as opposed to having to convince people to upgrade to a new version).

Agreeing with some of the other experts highlighted in this piece, Yonatan says that the cloud is a way to:

  • remove stress from software updates;
  • allow for simple and broad access;
  • save money in your budget; and
  • scale as needed, since the physical act of introducing hardware is unnecessary within a cloud (except at the level of the entire ecosystem).

10.) Minimized Resource Staff – Aaron Vick at Cicayda

Cicayda makes legal discovery software, and Aaron Vick is its Chief Strategy Officer. One of Vick’s core specialty areas is technology workflow. He is sold on cloud hosting for two primary reasons mentioned by others within this report: scalability and mass-updating across the whole user base. From his perspective, those two capabilities of the virtualized model allow it to far surpass what was possible with systems built in the 1990s and 2000s. Because you can scale on demand, you do not have to worry about your system regardless of how many people are making requests to your servers. And being able to introduce an updated version of the software to the entire population of users simultaneously saves money, since you do not need to fund the resource staff that a traditional installation environment would require.

11.) Innovative Edge – Jeff Kear at Planning Pod

Planning Pod is a cloud-hosted SaaS registration and event management system. The way Jeff Kear sees it, innovation is one of the most effective ways to generate attention and foster customer retention. Plus, it gives you a way to differentiate yourself, which is absolutely critical within a crowded market. Because the app runs within a behind-the-scenes backend instead of on user devices, the software company is much better able to introduce new features and mechanisms than if it distributed file updates that would not always get installed locally.

Plus, Kear pointed out that a software company is able to test features across the whole user community rather than testing with small groups of users.

12.) No Big Initial Investment – Mirek Pijanowski at StandardFusion

StandardFusion is a governance, risk management, and compliance (GRC) app that is hosted in the cloud – intended to make maintaining compliance and security more user-friendly. Mirek Pijanowski, the firm’s cofounder and CEO, notes that cloud is a great technology to leverage because it means you don’t have to directly manage equipment.

“Every year,” says Pijanowski, “we reevaluate the time and costs associated with moving our applications away from the cloud and quickly determine that with the dropping cost of cloud hosting, we may never go back.”

Conclusion

Are you in need of cloud hosting for your business? At Total Server Solutions, we believe that a cloud-based solution should live up to the promise described by the above developers and thought-leaders. It should be scalable, reliable, fast, and easy to use. We do it right.

 

***

 

Note: The statements by the developers that are referenced within this article were originally made to and published by Stackify.

This carnival ride goes to the cloud.


We all know about the growth of cloud computing. From a consumer perspective, the first thing that comes to mind is SaaS, or software-as-a-service. If we want to understand how seismic a shift this technology is imposing on IT, though, we need to look at the building blocks: platforms and infrastructure.

Let’s look at cloud growth speed to better gauge this transition. Industry analysts from Bain & Company say that, worldwide, cloud IT services will expand from $180 billion in 2015 to $390 billion in 2020, a head-turning compound annual growth rate (CAGR) of about 17%. Who is buying these services? Well, it’s not just the startups, that’s for certain. 48 of the Fortune Global 50 have publicly stated that they are moving some of their systems to cloud hosting (some adopting more aggressively than others). These findings are from the Bain report “The Changing Faces of Cloud,” which also shows us a breakdown of the key cloud types: from 2015 to 2020, SaaS is predicted to grow at a 17% rate, while the combined group of PaaS and IaaS is forecast to expand at an almost staggering 27%.
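As a quick sanity check on those numbers (a sketch in Python using only the figures quoted above, not anything taken from the Bain report itself), the roughly 17% CAGR follows directly from the start and end values:

    # Implied compound annual growth rate for the figures quoted above
    start, end, years = 180e9, 390e9, 5
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")   # about 16.7%, i.e. roughly 17%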

Why? Why is cloud hosting growing so fast? Let’s look at why 12 app developers prefer cloud. (These are from statements credited to these individuals that were published by Stackify.)

1.) Download Speed – Jay Akrishnan at Kapture

Kapture, if you don’t know, is a customer relationship management (CRM) app. The company’s product marketer, Jay Akrishnan, says that what he likes about cloud is its ability to create a comprehensive data sync across all points of access. By achieving that unity of on-demand information, you remove the constraints that keep you from fully embracing a communication system. Software is available that lets anyone take advantage of this ability to exchange data, for business purposes such as collaborating between staff and with partners, and even for hobbies such as gaming. Akrishnan adds that cloud systems are generally stronger than what you could get with a VPN-based server.

Probably the most compelling point Akrishnan makes, though, is that the client must be able to rapidly download your app. If people cannot download very, very quickly, your app’s growth rate will suffer. Kapture relies on the cloud to deliver.

2.) Lowers Your Stress (to Increase Productivity) – John Kinskey at AccessDirect

From a headquarters in Kansas City, AccessDirect provides virtual PBX phone systems to businesses around the United States. John Kinskey is its founder. The company hosts its core telephony app in the cloud – so the business is betting on the technology with its own performance and credibility.

Kinskey says that cloud is powerful because it creates multiple redundancies for your infrastructure by distributing your servers across geographically disparate data centers. Also, and interestingly, the third-generation entrepreneur notes that the company moved this core software to the cloud because of a couple of pain points of operating the systems in-house. One was that they did not feel they had enough redundancy. The other was that he felt they were spending too much time and money maintaining their own machines. To AccessDirect, he notes, both of these elements were causes of stress that were relieved by cloud.

Since Kinskey mentioned that aspects of being in a legacy scenario can be stressful, that emotional side should not be overlooked. If your current approach of using your own data center for an app is making your workplace or your own thoughts less calm, a couple of reports from risk management and insurance advisory Willis Towers Watson – both on its Global Benefits Attitudes surveys – are compelling. The 2014 report found that there was a high correlation between stress and lack of motivation on the part of staff: 57% of people who said their stress level was high also said that they were disengaged. Compare that to a 10% disengagement level among those who say their stress is low. Add to those numbers a key finding revealed by the 2016 report – that fully three-quarters of employees say stress is their top health concern. The bottom line is that if hosting your own app is stressful, it’s a wise business decision to move to cloud.

3.) Real-Time Decisions – Lauren Stafford at Explore WMS

Explore WMS actually gives us a great perspective because it is not software itself but rather an independent resource for supply chain professionals. Its key concern is still the same, though: knowing that this form of hosting can efficiently deliver warehouse management software. The media outlet’s digital publishing specialist, Lauren Stafford, notes that the key need she feels is met by cloud hosting is immediate data access. Integration is often tricky in an on-premise setting since you need to think about how to configure servers; in the cloud, she explains, you are able to build more flexibly. By getting real-time data to the end user, a company has the insight it needs to make better decisions, moment by moment.

In the case of warehouse management systems (the critical point for Explore WMS), business clients are able to get actionable real-time inventory details with a status that is reliable and relevant right now – important especially if a shipment gets delayed.

4.) Meets Simultaneous Needs – Dr. Asaf Darash at Regpack

Regpack is an online registration portal that has some big-name clients, such as Goodwill, the NFL, and Stanford. The system is designed using the knowledge its founder, Dr. Asaf Darash, attained while earning a computer science PhD focused on data networks and integration. It is notable, given that substantive academic background, that Darash believes in cloud hosting. He says that cloud is the best choice because it allows you to be able to see and work with your information from any location.

Interestingly, he also points out that when offering software-as-a-service, using cloud allows you to meet that need for your clients since they share that desire to work when they are at home or on a trip to another state or country. In other words, cloud meets that need simultaneously for both you and your clients. You can access your data with a web connection, take a look at your analytics, and grab whatever need-to-know details are in the system rather than having to download everything at the level of your own in-house server.

5.) Solves Problems Before They Happen – Kevin Hayen at Let’s Be Chefs

Let’s Be Chefs is an app-based weekly recipe delivery service, so it relies on the cloud for timely interaction with all of its members. Kevin Hayen says that there are far fewer operational frustrations (simply keeping everything together and moving) sucking time away from the firm’s laser focus on growth and development. Scaling machines is no longer an issue for Let’s Be Chefs. In this way, the cloud creates immediate peace of mind for startups, he says.

Conclusion

Check out the 7 other perspectives on cloud’s benefits for app hosting. Or, do you want high-performance cloud hosting for your application right now? At Total Server Solutions, we give you the keys to an entire platform of ready-built, custom-engineered services that are powerful, innovative, and responsive. Spin it up.

5 Ways Cloud Computing Helps Your Business


Companies across a broad spectrum, from shoestring startups to Fortune 500 enterprises, are wondering how they can better incorporate cloud computing into their organizations. There are manifold ways in which this technology can improve your efficiency and results. First, let’s look at the growth numbers for cloud to see how much is currently being invested in these systems and tools.

 

Gartner underestimates cloud growth by $600 million

 

One way to know how fast the need for public cloud services is growing is that the industry analysts are having a hard time keeping up with it. In September 2016, Gartner announced its updated projection for the market: that the sector would expand from $178 billion in 2015 to $208.6 billion in 2016 – a 17.2% rise. The primary reason for the increase is one of the three primary cloud categories, infrastructure as a service (IaaS) – forecast to skyrocket with a 42.8% revenue bump in 2016.

 

While that type of fast growth may sound unsustainable, in February Gartner issued a new analysis for 2017 with an increased growth rate of 18%. The projection is an increase from $209.2 billion in 2016 to $246.8 billion in 2017. Again, the most powerful expansion will occur with IaaS, only slightly decelerating from its breakneck pace to climb to $34.6 billion in 2017, a 36.8% rise. Do these new projections seem overly optimistic? Gartner is simply responding to its own underestimation of the segment; note the $209.2 billion starting figure for 2016, which outdid the $208.6 billion prediction from September. That may not seem substantial when comparing the totals, but it means there was an additional $600 million of business generated that Gartner did not foresee.
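To make the arithmetic behind those Gartner comparisons explicit (a small Python sketch based only on the figures quoted above):

    # Gartner figures cited above, in billions of dollars
    forecast_2016, actual_2016, forecast_2017 = 208.6, 209.2, 246.8
    print(f"2016 underestimate: ${(actual_2016 - forecast_2016) * 1000:.0f} million")  # ~$600 million
    print(f"Implied 2017 growth: {forecast_2017 / actual_2016 - 1:.1%}")               # ~18%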

 

What are the specific ways that cloud can help your business, though? How does it empower your mission and goals?

 

By facilitating ease-of-use

 

Installing security patches and software updates can be tedious and time-consuming. Since that’s the case, smaller companies will often use IT contractors who can get overbooked and fall behind on individual client needs. Alternatively, sometimes the IT role is taken up by a full staff member such as the office manager, who does not have the time or training to handle every issue. These ways of approaching the digital world can be expensive, labor-intensive, and unnecessarily stressful. By using a cloud provider to deliver your IT environment, explains entrepreneurial mentoring nonprofit SCORE, everything will be patched and updated automatically. Do you have an IT team? In that case, says the association, cloud is still a strong move by relieving your employees of maintenance tasks so that they can be squarely focused on emergent tech and new business development.

 

By delivering scalability so your company can grow

 

The elasticity of cloud systems gives you access to new resources on demand, so you can increase the depth of your infrastructure for peak times such as the holidays or after you get press, and grow your backend exponentially if you hit a tipping point. This characteristic is critical because it is challenging for firms to figure out what computing resources they will require. Cloud sets aside the guessing game, letting you react as traffic or user behavior changes and making sure you have enough fuel to keep expanding without waste. Growth is not just about customers, of course, but about adding your own systems. If you add a collaboration tool, the resources are immediately “at the ready” to allow it to run effectively. Your company is better able to adapt in the moment; in other words, it is more flexible.

 

By protecting you from Internet crime

 

Sony, Target, and Home Depot have all been ferociously hacked on a massive scale. Hackers have taken huge strikes at the federal government, too: there were numerous reports that hackers (believed to be Russian) who had intruded into the White House and State Department email systems in 2014 were continuing to evade the government’s efforts to remove them. In this landscape, security is increasingly challenging. Furthermore, these examples of cyberattack are so vast in scale that they may make you think your business is too small to interest intruders – but in fact, small businesses are particularly vulnerable. Statistics are compelling along these lines:

 

  • The threat is real. Incredibly, 2012 figures from the National Cyber Security Alliance show that 1 out of every 5 small companies were already being hacked annually. Among the group that was infiltrated at that time, the NCSA estimated that 3 in 5 were bankrupt in just 6 months.
  • When you go offline, your revenue suffers. According to Andrew Lerner of Gartner, polls of business executives suggest that the average cost of downtime is $5600 per minute. What about going down for an hour? In that case, the average loss is $336,000.
  • Let’s take an example of a DIY tool that businesses often fail to protect. Many, many small businesses use WordPress. The W3Techs Web Technology Surveys (accessed September 5, 2017) show that WordPress is used on 59.4% of sites that have identifiable content management systems – translating to 28.6% of all sites globally. WordPress is a big target because it is so popular; for example, Threatpost reported in February that “attackers have taken a liking to a content-injection vulnerability disclosed last week and patched in WordPress 4.7.2 that experts say has been exploited to deface 1.5M sites so far.”

 

The point? In this dangerous context, the case for digital security is compelling. The WordPress example in the third bullet above is just one of the security risks in play that could be obstacles to business. Reducing the risk of online attack increases the strength and certainty of your firm’s development.

 

Enter the cloud: credible, knowledgeable partners substantially boost the safety of most small business data (provided you do not have industrial-grade security mechanisms and monitoring on-site). As SCORE advises, “Storing your data in the cloud ensures it is protected by experts whose job is to stay up-to-date on the latest security threats.”

By allowing your team to work together

 

A lot of business discussion lately has been about the value of collaboration. For instance, one of the most often-praised benefits of an open office layout is how it enhances collaboration (although there is definitely not complete agreement that open offices deliver on their promises). Regardless of how effective that design approach is, it signals how important the integration of numerous people’s perspectives is to business.

 

How can cloud help with this business need? It is collaborative by design. Cloud-hosted apps are accessible 24/7 from virtually any web-connected device. That means, effectively, that your business no longer has walls in terms of your ability to let people interact with your systems to meet your business objectives. Through a cloud ecosystem, personnel and other partners in widely distributed geographical locations can work together on the same file (which is automatically backed up if you’re using a high-quality provider).

 

Through a well-managed cloud platform, people from all over the country, and internationally, can contribute to the project without having to worry about repeatedly passing files back and forth through email. While email is still a comfort zone for many, its model of sending files back and forth is less efficient than the cloud. Plus, it creates more potential for a space-time paradox, the accidental creation of a second “working copy” of a project (the results of which Doc Brown warns could “destroy the entire universe”).

 

By being ready to launch, now

 

Another key point about cloud, mentioned above but that deserves its own attention, is that there is no ramp-up time for a cloud system. You can access one today.

 

Do you need a cloud system that is easy to use, highly scalable, reliable, fast, and secure, so you can start collaborating at a moment’s notice? At Total Server Solutions, we do it right.

Split Testing E-Commerce Revenue


Sometimes it can be difficult to figure out exactly what it is that is making your company’s growth plateau or shrink. In fact, it is often challenging to even perceive some potential culprits because they seem so fundamentally beneficial. Nonetheless, it is important to ask hard questions – and, in so doing, put different aspects of your company under a microscope – if you want to grow. (For example, have you truly adopted high-performance hosting so that your infrastructure is furthering UX?)

 

In that spirit, here’s a question: Is it possible that split testing (or A/B testing) could be hurting your e-commerce revenue? Clearly, the concept behind split testing is a sound one: by presenting different versions of a page to random portions of your audience, you should be able to determine which version is preferable based on how well each one turns visitors into users or customers. This method has even, somewhat controversially, been used by major newspapers to split-test headlines, driving more traffic to news stories to keep the outfits prominent in the digital era.

 

A/B testing seems to be a smart way to better understand how your prospects and users make decisions; so how could it hurt your revenue? Online growth specialist Sherice Jacob notes that the trusted, somewhat standardized practice often does not deliver the results that business owners and executives expect. Jacob points out that this form of digital analysis, somewhat bizarrely, “could be the very issue that’s causing even the best-planned campaign to fall on its face.”

 

In a way, though, it’s not bizarre. Thoughtful business decisions often have unexpected results. (Anything can be done well or poorly – such as your choice of host, which will determine whether your infrastructure is secure. Failure to look for SSAE-16 auditing is an example of a mistake made when picking a web host.) What mistakes can be made when split testing? How and why does it fail? Let’s take a look.

 

  • How many tails do you have?
  • The magic of split testing: Is it all an illusion?
  • Getting granular – 6 key questions for core hypotheses
  • SEO hit #1 – failure to set canonicals
  • SEO hit #2 – failure to delete the losing option
  • Results from your e-commerce hosting

 

How many tails do you have?

 

Analytics company SumAll put two copies of a page that were identical – with no differences whatsoever – into one of the most well-known split-testing tools, Optimizely. Option A beat option B by almost 20%. Optimizely fixed that particular issue; nonetheless, it does reveal how misleading the output from these experiments can be. Think, after all, if those pages had just one minor difference. You would then confidently assume that A was the better choice, and feel backed up by the software’s numbers.
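To see how easily noise alone can produce an apparent winner, here is a minimal A/A simulation in Python (an illustration with made-up numbers, not SumAll’s actual experiment): two identical pages with the same true conversion rate can still show a sizable “lift” when the sample is small.

    import random

    random.seed(7)
    true_rate, visitors = 0.05, 500            # same page, same underlying conversion rate
    conv_a = sum(random.random() < true_rate for _ in range(visitors))
    conv_b = sum(random.random() < true_rate for _ in range(visitors))
    lift = conv_a / conv_b - 1                 # apparent "lift" of A over B
    print(f"A: {conv_a} conversions, B: {conv_b} conversions, apparent lift: {lift:.0%}")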

 

An issue like this can arise with A/B testing due, fundamentally, to the design approach taken for the algorithms built into the program. These approaches are categorized as one-tailed and two-tailed. One-tailed tests are simply trying to find a positive connection; it’s a black-and-white solution. With just one tail, your weakness is the statistical blind spots, says Jacob. Two-tailed testing looks at these e-commerce outcomes from two different angles (a short sketch of the difference follows the definitions below).

 

The distinction made by the UCLA Institute for Digital Research and Education helps to clarify:

 

  • One-tailed – Testing that is based on determining whether there is a relationship from a single direction “and completely disregarding the possibility of a relationship in the other direction.”
  • Two-tailed – No matter which direction you use to address the relationship, “you are testing for the possibility of the relationship in both directions.”
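Here is a minimal sketch of that distinction, using a two-sample proportion z-test on hypothetical conversion counts (this assumes the statsmodels Python library; the numbers are illustrative only):

    from statsmodels.stats.proportion import proportions_ztest

    conversions = [120, 100]    # variant A, variant B (hypothetical)
    visitors = [2000, 2000]

    _, p_two_sided = proportions_ztest(conversions, visitors, alternative='two-sided')
    _, p_one_sided = proportions_ztest(conversions, visitors, alternative='larger')
    print(f"two-tailed p = {p_two_sided:.3f}, one-tailed p = {p_one_sided:.3f}")

The one-tailed p-value is roughly half the two-tailed one, which is why one-tailed tests “find” winners more readily while ignoring effects in the opposite direction entirely.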

 

The magic of split-testing: Is it all an illusion?

 

In 2014, conversion optimization firm Qubit published a white paper by Martin Goodson with the shocking title, “Most Winning A/B Test Results are Illusory.” In the report, Goodson presents evidence that shows that poorly performed split testing is actually more likely to lead to false conclusions than true ones – and, well, um, bad information should not be integrated into e-commerce strategy.

 

The crux of Goodson’s argument comes down to the concept of statistical power – which can be understood by thinking of a project in which you want to find out the height difference between men and women. Measuring only one member of each sex would not give you a very broad set of data. Using a larger population of men and women, getting a large set of heights by measuring a lot of people, will mean that the average height stabilizes and the actual difference is better revealed. As your sample size grows, you gain greater statistical power.

 

To get back to the notion of split testing, let’s say that you have two variants of the site you want to assess. Group A sees the site with a special offer. Group B sees the site without it. You simply want to calculate the difference in response based on the presence of the offer. The difference between the two results should be considered in light of the statistical power (amount of traffic).

 

What is the significance of statistical power? Knowing what you want in a sample size (volume of traffic) will ensure that you don’t stop the testing before you have collected enough data. It is easy to stop early and be led in the wrong direction by false positives.

 

Goodson says to think of a scenario in which two months would give you enough statistical power for results to be reliable. A company wants the answer right away, so they test for just two weeks. What is the impact? “Almost two-thirds of winning tests will be completely bogus,” he says. “Don’t be surprised if revenues stay flat or even go down after implementing a few tests like these.”
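As a rough illustration of how much traffic “enough statistical power” can mean, here is a sketch of a sample-size calculation for a scenario like the one above, with an assumed 4% baseline conversion rate and a 10% relative lift (again assuming the statsmodels Python library; the inputs are hypothetical):

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline, with_offer = 0.040, 0.044    # assumed rates: 4% baseline, 10% relative lift
    effect = proportion_effectsize(with_offer, baseline)
    n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                             power=0.8, alternative='two-sided')
    print(f"Visitors needed per variant: {n_per_arm:,.0f}")

With assumptions like these, the answer comes to roughly 20,000 visitors per variant; stopping the test long before reaching that traffic level is exactly what produces the “bogus winners” Goodson describes.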

 

Getting granular – 6 key questions for core hypotheses

 

You want results that are meaningful from these tests. Otherwise, why bother? Think in terms of possible sources of confusion or frustration for visitors, either at the level of the hook or within the funnel, advises Qualaroo CEO Sean Ellis. Get this information directly from users via surveys or other comments.

 

Based on those bits and pieces, come up with a few hypotheses – your hunches about what you can do that might improve the conversion rate or give you better business intelligence. You can see whether or not those hypotheses are correct using the A/B tests, via an organized testing plan. A testing plan will make it much easier to strategize and consistently collect more valuable information.

 

These 6 questions can guide you as you develop your testing plan, says Ellis:

 

  1. What is confusing customers?
  2. What is my hypothesis?
  3. Will the test I use influence response?
  4. Can the test be improved in any way?
  5. Is the test reasonable based on my current knowledge?
  6. What amount of time is necessary for this test to be helpful?

 

That short list of questions can help you become more sophisticated with your A/B testing to avoid false positives and use the method in its full glory.

 

SEO hit #1 – failure to set canonicals

 

Split testing can hurt your SEO as well. You need to set a canonical URL for each test page (via the rel="canonical" link element), because two almost identical versions of the same page confuse search engines about which one should be indexed and ranked.

 

SEO hit #2 – failure to delete the losing option

 

Another issue for SEO (and in turn for your revenue, if not your conversion) that can be caused by A/B testing is when you do not delete the page that loses in the comparison. That’s particularly important if you’ve been testing the choices for a while – since that generally means that the search engines will have indexed it.

 

“Deleting it does not delete it from search results,” notes Tom Ewer via Elegant Themes, “so it’s quite possible that a user could find the page in a search, click it, and receive a 404 error.”

 

Results from your e-commerce hosting

 

Just as you want to see impressive results (and not a downturn) from your split-testing, you want your hosting to be working in your favor as well – and for online sales, security and performance are fundamental. At Total Server Solutions, compliance with SSAE 16 is your assurance that we provide the best environmental and security controls for data & equipment residing in our facilities. See our high-performance plans.

How to Set Up a Non-Blog WordPress Site


While WordPress excels as a blogging platform because that was its original intended function, it has become increasingly sophisticated as a general tool to build websites. You could create an e-commerce shop, a portfolio, or a business site in this way.

 

Note that you could easily include a blog with that site if you want (either upfront or at any later point) – as indicated below in the discussion of a page for posts. The blog does not have to be the defining centerpiece of your site, though.

 

Here is how you would go about setting up your WordPress as a static site, drawing a lot of ideas from WordPress themes and plugin company DesignWall.

 

  • What exactly is a static site (vs. a dynamic one)?
  • How to set the homepage of your static WordPress site
  • Creating the menus of your site
  • How to make your non-blog WordPress site stand out
  • The option of doing posts within their own page
  • Great hosting for strong WordPress UX

 

What exactly is a static site (vs. a dynamic one)?

 

A static site will have a homepage that does not change, no matter what new content you have go up elsewhere. That is in contrast to a dynamic site, which would be changing as you add new material – displaying the most recent posts from your blog. (Note that to be completely technically accurate, your site will remain dynamic if you use WordPress as its basis no matter what; however, you are essentially giving your site a static face regardless of its specific designation from a technical standpoint.)

 

The homepage will always use the same exact page – so let’s talk about that aspect.

 

How to set the homepage of your static WordPress site

 

You will be able to move forward with establishing this page whether or not you are just getting started with a new installation. Don’t worry about exactly what you want to say; you can create the page and then go back into it to figure out exactly what your message will be. Just follow these seven steps:

 

  1. Log into your WP admin account.
  2. Click on Pages in the left-hand sidebar, and select Add New Page.
  3. Give it the simple name “Homepage” for now (it can be changed later).
  4. Your theme may give you the option to turn off Comments and Pingbacks, typically both listed under “Discussion.” If those options are not available there, you will see them as small checkboxes in the upper-right-hand corner of each page, above “Publish.”
  5. To test and go live with this page, go into Reading Settings, which is within Settings in the sidebar.
  6. There, you will see “Front page displays”; set it to “A static page,” select “Homepage,” and then Save Changes.
  7. Look at your site, and you should see the Homepage displayed as your homepage.

 

Creating the menus of your site

 

It is time to establish menus for your static WordPress site. However, before we move forward with menus, think about what other pages you will need, and go ahead and create draft versions of those. Just create the pages at this point, without concerning yourself about the content. By having these pages at least in very rough place-holding form, you will be able to set up your navigation menu in a more logical and meaningful way.

 

Go ahead and add those pages the same as you did the Homepage. They can have simple names at this point. Beyond a homepage, here are the “must-have” pages for a 10-page business site, according to custom WordPress theme firm Bourn Creative: About, Services, Products, FAQ, Testimonials, Contact, Privacy Policy, Newsroom, and Portfolio. Adjust as makes sense for your industry and company.

 

Now go to the left Sidebar, click on Appearance, and select Menus. Here, you will see that you can add any of the pages you just created to your menu, which (depending on your theme) is typically displayed on your main header or in the sidebar.

 

You can nest any of the menu items you want by dragging and dropping them into position.

 

It is also possible to change names of menu items to whatever you want (without having to rename the linked page). Go into Menu Settings, and you can automatically add pages to the menu if that makes sense to you.

 

It is not necessarily a good idea to add pages automatically. That’s because you could end up with a lot of clutter. Probably you want certain pages to be especially prominent (e.g. About Us, Products or Services, etc.).

 

You may also have the option within your theme to change where this menu can be seen on your site.

 

How to make your non-blog WordPress site stand out

 

Probably, you do not want a mediocre non-blog WordPress site. You want a great one. Here are some tips on how to make it stronger from Alyssa Gregory on SitePoint:

 

  1. Choose a strong theme. Gregory notes the importance of the theme in terms of how your content will be displayed. You may not want to have dates in your posts, for instance. Something with a magazine format will typically work well.
  2. Figure out how pages and posts make sense. Gregory also mentions that you do not have to set up a non-blog WordPress site as a series of pages; you can use posts instead. However, using pages is more organized, from her perspective. If you do use posts or are dedicated to that structure for whatever reason, her advice would be to stick to it – because trying to create a hierarchy that crosses between pages and posts on a non-blog WP site could quickly get confusing. However, you can really use both in a meaningful way as long as the posts all appear within a certain setting, on their own distinct page (so you have someplace that you’re building content, even if it’s not the basis of the site). See below on that.
  3. Dig into the code. Inevitably, the theme will need a little adjustment “under the hood.” That will allow you to clear out some of the more blog-centered elements that are built into the theme. An example would be when you turn off the ability to comment. You may still have a No Comments line in many themes, but that could be removed at the level of the code. It is usually also a good idea to clear out the RSS subscription option and anything else that is more of a reference to blogging than to a website without the blogging function.

 

The option of doing posts within their own page

 

You do not have to have a page for your Posts. However, if you do use Posts on a non-blog site, you will want to organize them within a page so that the non-blog structure remains the basis for everything. Actually, it does not hurt to create this page, says WPSiteBuilding.com, even if you don’t use it at this point. Generally, a blog is considered a good idea for search prominence and general engagement. This page could be called Blog or News or Thoughts or Updates – whatever you want. Just give it a title, without putting anything on it. To test, publish that page.

 

Great hosting for strong WordPress UX

 

Are you wanting to deliver the best user experience through your non-blog WordPress site? At Total Server Solutions, we are always working to find the best, most effective ways to serve you and provide solutions to help you meet your challenges. Explore our platform.

Could IoT Botnet Mirai Survive Reboots?


Mirai has been making a zombie army of swaths of the internet of things, so it is no wonder that manufacturers are taking steps to protect against it. However, one IoT device manufacturer’s failed attempt to protect its products against the botnet (used in massive DDoS attacks) shows how challenging this climate has become. When IoT-maker XiongMai, based in China, attempted to patch its devices so that the malware would be blocked, the result was described as a “terrible job” by security consultant Tony Gee.

 

Gee explained that he took products from the manufacturer to a trade convention, the Infosecurity Europe Show. The Floureon digital video recorders (DVRs) used in Gee’s demo did not have telnet open on port TCP/23 – but shutting off telnet access was insufficient as a defense.

 

Gee went through port 9527 via ncat. The passwords matched those of the web interface, and it was possible to open a command shell. Within the command shell, Gee opened a Linux shell and established root access. From the root user position, it was simple to enable telnet.

 

Even for devices that have telnet closed down, the device is hackable by getting a shell and restarting the telnet daemon, explained Gee, adding ominously, “And we have Mirai all over again.”

 

  • Tale of an immortal zombie
  • How could Mirai grow larger?
  • The doom and gloom of Mirai
  • How to protect yourself from DDoS
  • Layers of protections
  • What this all means “on the ground”

 

Tale of an immortal zombie

 

Mirai is changing, much to the frustration of those who care about online security. Prior to this point, malware that was infecting IoT devices (such as routers, thermostats, and CCTV cameras) could be cleared away with a reboot.

 

A method was discovered in June that could be used to remotely access and repair devices that have been enslaved by the botnet. The flip side of this seemingly good news is that the same avenue is a way that a Mirai master can generate reinfection post-reboot – so researchers did not release details.

 

Notably, BrickerBot and Hajime also have strategies that try to create a persistent, “immortal” botnet.

 

The researchers did not provide any specific information about the vulnerability out of concern that it would be used by a malicious party. The firm did list numerous other weaknesses that could be exploited by those using the botnet.

 

How could Mirai grow larger?

 

What are other possible paths of exploit that would allow Mirai to grow even larger than it is now? Those include:

  • DVR default usernames and passwords that can be incorporated into the worm element of Mirai, which uses brute-force methods through the telnet port (via a list of default administrative login details) to infiltrate new devices.
  • Port 12323, an alternative port used as telnet by some DVR makers in place of the standard one (port 23).
  • Remote shell access, through port 9527, to some manufacturers’ devices via the username “admin” and the passwords “[blank]” and “123456.”
  • One DVR company that had passwords that changed every single day (awesome), but published all the passwords within its manual on its site (not awesome).
  • A bug that could be accessed through the device’s web server, accessible through port 80. This firmware-residing buffer overflow bug currently exists in 1 million web-connected DVR devices.
  • Another bug makes it possible to get password hashes from a remote device, using an HTTP exploit called directory traversal.

 

The doom and gloom of Mirai

 

The astronomical expansion of Mirai is, at the very least, disconcerting. One recent report highlighted in TechRepublic found that Internet of Things attacks grew 280% during the first six months of 2017. The botnet itself is at approximately 300,000 devices, according to numbers from Embedded Computing Design. That’s the thing: Mirai is not fundamentally about IoT devices being vulnerable to infection. It’s about the result of that infection – the massive DDoS attacks that can be launched against any target.

 

Let’s get back to that infected and unwitting Frankenstein-ish army of “things” behind the attacks, though – it could grow through changes to the source code by hackers, updating it to include more root login defaults.

 

The botnet could also become more sophisticated and powerful as malicious parties continue to transform the original so that it has more complex capacities to use in its DDoS efforts. Today it has about 10 vectors or modes of attack when it barrages a target, but other strategies could be added.

 

How to protect yourself from DDoS

 

Distributed denial of service attacks from Mirai really are massive. They can push just about any firm off the Internet. Plus, the concern is not just about that single event of being hammered by false requests. Hackers first open up with a toned-down attack, a warning shot that is often not recognized as a pre-DDoS by custom in-house or legacy DDoS mitigation tools (as opposed to a dedicated DDoS mitigation service). These dress-rehearsal attacks, usually measuring under 1 Gbps and lasting 5 minutes or less, are under the radar of many DDoS protection solutions that have settings requiring attack traffic to be more substantial.

 

When DDoS started more than 20 years ago, engineers would use a null route, or remote trigger blackhole, to push the traffic away from the network and prevent collateral damage to other possible victims.

 

Next, DDoS mitigation became more sophisticated: traffic identified as problematic on a network was redirected to a DDoS scrubbing service – in which human operators analyzed attack traffic. This process was inefficient and costly. Also, remediation often did not get started right away following detection.

 

Now, DDoS protection both must be able to “see” a DDoS developing in real-time and have the ability to gauge the DDoS climate for trends, generating proactive steps to mitigate an attack. Enterprise-grade automatic mitigation protects you from these attacks and maintains your reliability.

 

Layers of protections

 

There are various levels at which distributed denial of service can be and should be challenged and stopped. First, a DDoS protection service against real and present threats, built by a strong provider, can effectively keep you safe from these attacks – but there are other efforts that can be made as well. Internet service providers (ISPs) can also protect the web by removing attack traffic before it heads back downstream.

 

Defense should really be at all levels, though. The people who make the pieces of the IoT – software, firmware, and device manufacturers – should build it with protections in place from the start. Installers and system admins should change passwords from the defaults and apply patches as soon as they become available.

 

What this all means “on the ground”

 

It’s important to recognize that this issue is not just about security firms, device manufacturers, and criminals. It’s also about, really, all of us: the home users of devices, such as the DVR. (If you don’t know, a DVR is a device that records video on a mass storage device such as an SD memory card or USB flash drive… when it isn’t busy being used in botnet attacks).

 

The home user should be given reasonable security advice. Many users do not respond quickly when new patches are released. IoT devices are often built just strongly enough that they can operate; security is not a priority. That is bad – but it means users need to do their homework on security prior to purchase. They also need to change default passwords to complex, randomized ones.

 

Protect yourself from Mirai

 

What can you do to keep your business safe from Mirai and other DDoS attacks? At Total Server Solutions, our DDoS mitigation & protection solutions keep your site up and running, your content flowing, and your customers buying, seamlessly. How does it work?

Mirai Botnet Master Bestbuy


 

Anonymity. It is a characteristic that is often not viewed positively. We all want to be recognized for our accomplishments and actions, our most impressive or good-hearted deeds. However, sometimes, we would prefer to remain in the shadows – and that’s especially true for the criminals among us; after all, their identification could lead to jail time and other unwanted consequences.

 

Well, if anonymity is what you want, you probably should avoid prominence in the DDoS community – or face the wrath of Brian Krebs. Krebs, an independent investigative journalist who specializes in information security, seems to have developed a knack lately for unmasking malicious online parties. He is probably best known as the guy who was targeted with one of the biggest distributed denial of service (DDoS) events of all time – and responded by following a trail of data crumbs to identify the specific person he believed was responsible for the mega-attack.

 

Let’s briefly review the initial attack on Krebs (with a massive army of Mirai IoT devices) last September and the revealing of the Mirai author in January. Then we will double back to begin the Bestbuy story in November, when he (Bestbuy = Daniel Kaye) and another hacker (or simply another identity for Kaye himself) started taking control of the botnet. From there we will proceed to the downfall of Bestbuy: his arrest in February. Then we will go over Krebs’ correct identification of Kaye prior to the release of his name (another victory for Krebs that should be noted); and, finally, the controversial suspended sentence that he received from a German court, the precursor to a trial he is expected to soon face in England.

 

  • Bestbuy unmask prequel: Anna-Senpai
  • From hacker duel to handcuffs
  • Krebs fingers Kaye
  • How to protect yourself from DDoS

 

Bestbuy unmask prequel: Anna-Senpai

 

At approximately 8 pm EST on September 20, 2016, KrebsOnSecurity started getting hit with a blast of bogus traffic that measured 620 gigabits per second. Krebs had DDoS protection and his site was not pushed offline; however, it certainly got his attention. It ended up being one battle in a kind of DDoS-versus-Krebs war. After all, many think Krebs was targeted because of a previous event: on September 8, less than two weeks before his site was hit, Krebs named two Israeli hackers who were behind a very successful DDoS-as-a-service company that brought in $600,000 over two years; and the two men he named in that piece (both just 18 years old) were arrested two days later.

 

Krebs noted that he thought the attack was probably a retaliation against that article, saying that freeapplej4ck was a string contained within some of the POST requests during the DDoS attack. This term was “a reference to the nickname used by one of the vDOS co-owners,” Krebs said.

 

It certainly seems that those behind this Mirai assault were gluttons for punishment, since Krebs had already proven himself adept at tracking down hackers. Fast-forward to January, and Krebs fingered Paras Jha, Rutgers University student and president of the DDoS mitigation service ProTraf Solutions, as the author of Mirai. (Note that Jha has not been charged with any crimes, as of July 28, per Krebs.)

 

From hacker duel to handcuffs

 

The security world became fixated on Mirai following this assault on Krebs, for obvious reasons. In November, Motherboard indicated that the attack on Krebs – followed up by ones on Spotify, Twitter, German ISP Deutsche Telekom, and other major services – was headed for even darker territory. Two hackers, or one with two identities, had created another enormous botnet using a variant of Mirai, and they were offering it as a pay service (similar to vDOS).

 

One of the two hackers (or the only one, if it is the same person) was better at bragging than he was at spell-checking; after telling Motherboard that he had more than a million hacked IoT devices under his control, he boasted, “The original Mirai was easy to take, like candy from this kids” [sic]. He was referencing the hacker battle to be the new godfather of all these compromised devices. One popular perspective at the time was that the fresh strain was created by a current Mirai botmaster in order to enslave additional devices to its army.

 

Unfortunately for Bestbuy, law enforcement was soon on his tail. In February, British police arrested a 29-year-old man at a London airport; however, notably, they did not release his name. The arrest was the first one related to Mirai. The German Federal Criminal Police Office (BKA) noted that the 29-year-old was being charged with an attack on Deutsche Telekom – soon after which Kaye/Bestbuy had messaged Motherboard that he was one of the people behind it.

 

“Bestbuy is down,” concluded Jack B. of the DDoS research collective SpoofIT at the time.

 

Krebs fingers Kaye

 

How did Krebs identify Bestbuy? Here are key points made to connect Bestbuy to Kaye:

 

  • When the Mirai botnet was used to take Deutsche Telekom offline, the registrant names for the domains affiliated with the servers controlling it were “Spider man” and “Peter Parker” (the alter ego of Spider-Man). The street address used for registration was in Israel.
  • The IP tied to the botnet that took the German ISP offline was 62.113.238.138. Only nine domains have ever been associated with this IP address. Eight of those domains were related to Mirai. The one that was not was dyndn[dot]com, a site that sold GovRAT, a remote access trojan (RAT) designed to log keystrokes. GovRAT has been used to attack over 100 corporations.
  • GovRAT was offered for sale by a user Spdr, with the email spdr01@gmail.com, on oday[dot]today.
  • Another malware service that was sometimes sold with GovRAT allowed people to fraudulently use code-signing certificates. Within the digital signature for that program was the email parkajackets@gmail.com.
  • The email addresses spdr01@gmail.com and parkajackets@gmail.com were the ones used for the vDOS usernames Bestbuy and Bestbuy2. (Remember Krebs’ article that identified the founders of that Israel-based DDoS-as-a-service ring.)
  • In addition to access from Israel, Bestbuy and Bestbuy2 logged into vDOS from Internet addresses in Hong Kong and the UK. Bestbuy2 actually only existed because the Bestbuy account was canceled for logging in from those international addresses.
  • A key member of the Israel-based IRC chat room and hacker forum Binaryvision.co.il had the email spdr01@gmail.com and was nicknamed spdr01.
  • Binaryvision members told Krebs that spdr01 was about 30; had dual citizenship in the UK and Israel; and was engaged.
  • That Binaryvision user’s social accounts were both connected to a 29-year-old man named Daniel Kaye. Kaye’s Facebook profile used the alias DanielKaye.il (using Israel’s top-level domain) and showed he was engaged to marry a British woman named Catherine. The profile photo is of Hong Kong.
  • Daniel Kaye is listed as the registrant for Cathyjewels[dot]com, and the email address used for that domain was danielkaye02@gmail.com.
  • On Gravatar, the account Spdr01 uses the email address danielkaye02@gmail.com.

Following Krebs’ story, he was proven right: Bestbuy admitted in court that he was responsible for attacking Deutsche Telekom using Mirai. Then, on July 28, Krebs wrote, “Today, a German court issued a suspended sentence for Kaye, who now faces cybercrime charges in the United Kingdom.” Notably (given the slap on the wrist from Germany), Kaye is expected to be extradited to the UK to face criminal charges there.

 

How to protect yourself from DDoS

 

The Mirai botnet is fascinating from the perspective of a mystery or web of information. However, it is not exactly fun to be hit with a massive barrage of bogus requests from an army of zombie routers. Is your company safe from DDoS? At Total Server Solutions, our DDoS mitigation service isolates attack traffic and allows only clean, inbound traffic to pass through to your server. Safeguard your site.

Why You Should and Shouldn't Use Colo


What if you could somehow pass on your server room responsibilities to someone else? How would it feel to get access to the network power, performance, and staff of a huge enterprise? If you replied, “That would be awesome,” to either of the above questions, colocation may be the right choice for you. Let’s first explore what this IT approach is and an overview of the current market before looking at key elements within the industry (to better understand what is impacting providers), and a list of reasons companies take this route.

 

  • Understanding Colocation
  • Changing Elements of the Colocation Industry
  • Top Reasons for Choosing Colocation
  • How to Approach Colocation Smartly
  • Moving Forward

 

Understanding Colocation

 

Colocation is leasing space in an outside data center for your servers and storage – with the owner of the facility meeting your needs for a secure physical location and internet connection. Unlike with cloud hosting, all of the hardware in a colo relationship is owned by you. This arrangement is attractive to many companies because of basic economies of scale: you can access a highly skilled staff, improve your bandwidth, bolster your data safety, and access more sophisticated infrastructure. Your bill is basically a combination of rack space and some degree of maintenance (often minimal).

 

Changing Elements of the Colocation Industry

 

Colocation providers are entering a trickier landscape as the market gets hotter. Buyer personas are proliferating; sustainability is becoming a greater concern; cloud hosting is on the rise; and computing strategies are becoming increasingly diversified and complex. Just how hot is colocation getting? With a 14.4% CAGR between 2011 and 2016, the industry is a bit steamy. (But don’t worry: enterprise-grade, multiple redundant cooling systems ensure that your hardware will never sizzle.)

 

Put another way, colocation may not be as trendy a concept as cloud, but the former is more widely used by enterprises than the latter. According to Uptime Institute’s 2017 Data Center Industry Survey, 22% of enterprise IT systems are housed in colocation centers, while 13% are cloud-based. Plus, colocation is growing right alongside cloud: according to figures highlighted in Virtualization Review, that combined 35% share of enterprise IT housed in external data centers is expected to rise to 50% by 2020.

 

Related: “How to Use Colocation to Your Advantage”

 

As the market continues to develop, colocation vendors must have the agility to reshape themselves in response while also looking for ways to build their own business by incorporating breakthrough equipment and strategies, and by continuing to focus on operations, affordability, and performance.

 

Here is a look at some of the key aspects of the market that are evolving, keeping life interesting for those who work at colocation providers:

 

Who buys colocation? In the past, people in facilities or procurement roles would typically be the ones engaging with colocation vendors. Now, though, choices on infrastructure are being handled by a broader group that includes line-of-business and C-level management. Since colocation firms are now interacting with more COOs, CFOs, and heads of business units, it is increasingly important that they are prepared, from both sales and business perspectives, to “talk shop” meaningfully with individuals from a wide array of backgrounds.

 

How is DCIM used? Both internally and as a value-added service, data center infrastructure management (DCIM) software is becoming a more central function in colocation facilities. DCIM bolsters service assurance while leading to better consistency across analytics. It allows companies to convert their data into actionable metrics and gives infrastructure executives insight into speed and reliability throughout the scope of systems, for more accurate, knowledge-driven decisions. These gains lead to a less expensive, more highly available, and more efficient ecosystem.
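One concrete example of the kind of metric DCIM tooling derives is Power Usage Effectiveness (PUE), the ratio of total facility power to the power drawn by IT equipment alone. The minimal Python sketch below is illustrative only; the function and sample readings are invented for this post and are not drawn from any particular DCIM product.

# Hypothetical illustration of one metric DCIM tooling commonly tracks:
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# The sample readings below are invented, not taken from any DCIM product.

def pue(total_facility_kw, it_equipment_kw):
    """Return Power Usage Effectiveness; 1.0 is the theoretical ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

readings = [
    {"hour": "00:00", "facility_kw": 1180.0, "it_kw": 820.0},
    {"hour": "12:00", "facility_kw": 1420.0, "it_kw": 910.0},
]

for r in readings:
    print(f"{r['hour']}  PUE = {pue(r['facility_kw'], r['it_kw']):.2f}")

Tracking a ratio like this over time, rather than raw power draw, is what turns facility telemetry into a decision-ready number.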

 

How is the data center designed? The way a data center is laid out must accommodate cloud hosting, edge computing, and other growing methods. Because those methods are in rapid flux, adaptability must be built into the architecture; flexibility makes it possible to pivot to meet different applications and needs. On the flip side, what colocation centers do not want are limited service options or stranded capacity. Addressing these issues requires a sustained focus on density and support for mixed-density rows. Right-sizing can be achieved through modular design so that colocation firms do not overprovision from the outset. These vendors must think about how much resiliency they want to implement and how far to go in that direction – keeping in mind that high resiliency, like high density, is expensive. Additionally, safety must be considered as an element of design, especially since higher density, in and of itself, poses a greater risk to staff.

 

Top Reasons for Choosing Colocation

 

The primary reason companies feel hesitant to choose colocation is a sense that they will lose control. Anyone who chooses this route knows they are handing their servers over to someone else.

 

Well, so then why do people do it? For one thing, yes, you lose day-to-day control of your servers in a physical sense, but you do retain much more control over them than in many hosting scenarios (most notably cloud, since that option is often juxtaposed with colocation). It is still your equipment and your software.

 

Beyond that, reasons vary. Small and midsize businesses can use it to affordably access a more sophisticated computing environment than they have onsite.

 

Another key, organization-nonspecific reason that colocation is used comes from Michael Kassner of TechRepublic: “[M]ost managers said their colocated equipment was mission critical, and the colocation providers were able to meet their requirements at a lower cost than if the service was kept in-house.” Sounds simple enough.

 

Here are a few additional ideas from Susan Adams of Spiceworks on the advantages of entrusting your servers to a colocation facility:

 

  • Improved physical security (think access logs, cage locks, and cameras)
  • Helpful support (well, if you’ve chosen the right provider)
  • Better uptime, since you’re getting access to cutting-edge uninterruptible power supply (UPS)
  • Better cooling so that your hardware gets better care
  • Scalability, since all you have to do is send the data center more machines
  • Connections with various major ISPs through dedicated fiber.

 

Colocation is often more cost-effective than using your own datacenter since the amount you get billed is inclusive of HVAC costs and power. “Even without those cost savings, though,” says Adams, “you’re paying for the life-improving peace of mind of an enterprise-quality, stable, and fast data center.”

 

How to Approach Colocation Smartly

 

How can you succeed with this infrastructural method? First, remember that colocation starts with your own software and servers: supplying and preparing them is your responsibility.

 

Once you have all your machines ready, Adams advises monitoring resource consumption so that you stay within any limits tied to your plan.
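How you track consumption will depend on your stack, but even a simple script can flag when a machine is approaching the limits in your plan. Here is a minimal sketch assuming the cross-platform psutil library; the thresholds are invented placeholders, not values from any particular colocation agreement.

# Minimal resource check using the cross-platform psutil library
# (pip install psutil). Thresholds are illustrative placeholders; set
# them based on the limits in your own colocation or hosting plan.
import psutil

CPU_LIMIT_PCT = 80.0  # hypothetical ceiling
MEM_LIMIT_PCT = 85.0  # hypothetical ceiling

def check_usage():
    warnings = []
    cpu = psutil.cpu_percent(interval=1)   # sampled over one second
    mem = psutil.virtual_memory().percent
    if cpu > CPU_LIMIT_PCT:
        warnings.append(f"CPU at {cpu:.0f}% exceeds {CPU_LIMIT_PCT:.0f}% limit")
    if mem > MEM_LIMIT_PCT:
        warnings.append(f"Memory at {mem:.0f}% exceeds {MEM_LIMIT_PCT:.0f}% limit")
    return warnings

if __name__ == "__main__":
    for warning in check_usage():
        print("WARNING:", warning)

In practice you would run something like this on a schedule and feed the output into whatever alerting you already use.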

 

Also, switching from your own datacenter to colo involves many moving parts. Build in extra time, and be prepared for potential snags so that everything proceeds smoothly.

 

Finally, to keep the environment usable, you want a strong connection to the colocation facility – as can be achieved with a Border Gateway Protocol (BGP) circuit and BGP tail.

 

Moving Forward

 

Are you considering colocation for your infrastructure? At Total Server Solutions, all of our datacenters are robust, reliable, and ready to meet your challenges. Discover our reach.

How to Use Colocation to Your Advantage

Posted by & filed under List Posts.

 

Let’s look at the colocation market and a few statistics; talk about why businesses are choosing colocation (i.e., the problems it addresses); and finally, review 10 strategies to select the best colocation provider.

 

What does the move off-premises look like?

 

The share of computing workloads handled onsite has hovered at approximately 70% for the last few years, but research suggests cloud and colocation will be responsible for a greater share in the years ahead.

 

According to the Uptime Institute’s 2016 Data Center Industry Survey, fully half of IT decision-makers predict that most of their computing will eventually occur through a third-party facility. Among those, more than two-thirds (70%) say they expect off-premises computing to overtake on-premises by 2020.

 

A substantial portion of the transition to external providers is headed for public cloud. However, many businesses will also be switching over to colocation, or colo – the rental of space within an external data center for a business’s own servers and hardware. The “co-” in colocation reflects the shared nature of the arrangement: you provide the servers and storage, while the vendor provides the facility, physical security, climate control, bandwidth, and power.

 

Colocation vendors have been expanding. That’s evident from statistics from business intelligence firm IBISWorld, which show a compound annual growth rate of 14.4% from 2011 to 2016, bringing the market to a total size of roughly $14 billion.
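To put that growth rate in perspective, compounding 14.4% per year over those five years multiplies the starting figure by roughly 1.96. The quick Python check below is illustrative only; the implied 2011 figure is back-calculated from the numbers cited here, not a figure published by IBISWorld.

# Quick arithmetic check on the cited colocation growth figures.
start_year, end_year = 2011, 2016
cagr = 0.144                  # 14.4% compound annual growth rate
market_2016_billion = 14.0    # market size cited for 2016, in billions

growth_factor = (1 + cagr) ** (end_year - start_year)
implied_2011 = market_2016_billion / growth_factor

print(f"Five-year growth factor: {growth_factor:.2f}x")          # ~1.96x
print(f"Implied 2011 market size: ${implied_2011:.1f} billion")   # ~$7.1 billion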

 

Why do businesses choose colocation?

 

Here are some of the most common reasons businesses use colocation, according to senior IT executives:

 

  • Worldwide growth
  • Challenges related to mergers or acquisitions
  • Migration of systems that are not core business
  • Leadership directives to move off internal hardware
  • Avoiding the cost of building a new data center
  • Limiting the churn that noncritical computing introduces into critical systems
  • Use of a different power grid for disaster recovery
  • Uncertainty about in-house resources or staff

 

Related: “Why Colocation?”

 

Michael Kassner of TechRepublic lists several other reasons for this practice that get a little more granular:

 

  • Cost-effectiveness – Because data centers can get volume deals on internet access and bandwidth, you can save on those costs.
  • Security – If an organization does not have IT staff with security expertise, the colocation facility is the safer place for its data.
  • Redundancy – Backup capacity is expanded for both power and the network. A business might have its own generators and uninterruptible power for outages, but it often will not have diversified its internet connections across multiple carriers.
  • Simplicity – You own the software and hardware, so you are able to update those components as needed without having to renegotiate with the vendor.
  • No more “noisy neighbors” – If you don’t have guaranteed resources in a VPS or cloud hosting plan, you can end up with other tenants hogging the resources (CPU, disk I/O, bandwidth, etc.), hurting your performance.

 

10 tips to select a strong colocation vendor

 

Any company that is using colocation is spending some of its budget on data center capacity from an external party. That being the case, it is entitled to expect that its vendors operate to standards at least as high as those it applies in-house. The brokering of services generally has become a more important skill for CIOs; as for colocation, the assessment, contract structuring, and management of these partnerships will become increasingly critical to the success of an IT department.

 

Here are tactics to make sure colocation works right for you (and you’ll notice that many of these questions cover similar ground to the general reasons listed above):

 

#1. Prioritize physical location. Yes, you want to be able to get to the facility easily for physical access; plus, relative proximity simplifies data replication and reduces network latency.

 

#2. Confirm third-party verification. You need to know that availability is fundamental to the infrastructure that you’re using. Make sure there is documentation to back up any claims made by the vendor about their ability to meet Statement on Standards for Attestation Engagements No. 16 (SSAE 16) or other key industry standards. If your systems are mission-critical, get evidence from the provider.

 

#3. Check for redundant connectivity. Redundancy is a key reason why colocation is a strong option, so make sure backup connectivity is in place. The reliability of those internet connections is also crucial.

 

#4. Look for commitments to security & compliance. Security should be a major concern of any data center, but verifying that commitment is a major concern for you. You also have to check that the vendor meets your regulatory requirements so you are protected and aren’t blindsided by violations.

 

#5. Review how the vendor will provide support. You need to make sure your needs are met both in terms of the hardware and support, as should be spelled out in the service-level agreement (SLA).

 

#6. Get a sense of business stability. Matt Stansberry of the Uptime Institute advises looking for a colocation facility that has been run for a number of years by the same organization, with a consistent group of providers and clients. In other words, you do not want moving pieces but stability. Problems are likelier to arise when the vendor you choose gets acquired by another organization, reinstalls hardware, adjusts its operations, or consolidates equipment. To gauge this aspect of the business, ask about the data center’s hardware lifespan, occupancy rate, and even employee turnover. Does the average staff member have a long tenure? If not, why? And if the hardware is aging, do not be surprised if the firm is gearing up for potentially problematic upgrades.

 

#7. Assess the scope of services offered. Ideally, the vendor will provide a range of services. It may sound irrelevant to your specific and immediate concerns of getting your equipment colocated. However, diversity of offerings means that you can adjust if your organization’s needs change without having to go through the process of vetting a new provider again.

 

#8. Make sure that cooling and power are guaranteed. The SLA should ensure that power and backup power will be in place without exception.

 

#9. Confirm that operations are aligned with your expectations. You are likeliest to experience downtime when errors or oversights are made in operations. You will not always be able to get full paperwork (maintenance records, incident reports, commissioning reports, etc.), but getting what you can will give you a more transparent window into how things run at the vendor.

 

#10. Generally improve your RFPs and SLAs. Make sure terms are clearly established within an RFP or SLA. Specific ideas from the Uptime Institute Network to enhance your effectiveness with these documents include: 1.) staying brief (2-3 pages) so that potential vendors don’t feel overwhelmed by a massive document; 2.) remembering that due diligence must occur regardless of which brands currently use the vendor; and 3.) keeping overprovisioning at bay by questioning hardware faceplate figures and any assumption that an equipment refresh will have an outsized impact.

 

*****

 

Are you looking to make the most of colocation as a strategy for IT at your business? The above considerations can guide you in the right direction. At Total Server Solutions, we meet the parameters of an SSAE-16 Type II audit; but our service is what sets us apart, and it’s our people that make our service great. Download Our Corporate Overview.

Get Started with the Internet of Things

Posted by & filed under List Posts.

Strategizing a conscientious plan will help you launch into the internet of things without any hitches along the way. Here, we look at three methods or best practices that seem to be held in common by the most successful IoT adopters, as indicated by an MIT overview. First, though, we assess statistics on the scope of the IoT and its general business adoption rate.

 

Is the internet of things on the rise? Well, considering recent IoT market statistics, the answer is a confident “yes”:

 

  • The total market size of the IoT will increase from $900 million to $3.7 billion between 2015 and 2020 (McKinsey).
  • The number of devices that make up the IoT will expand from an installed base of 15.4 billion to 30.7 billion by 2020, and on to 75.4 billion by 2025 (IHS).
  • IoT hardware, software, and business service providers will have annual earnings greater than $470 billion by 2020 (Bain).
  • Over the next 15 years, the total money that will be injected into the industrial IoT will be more than $60 trillion (General Electric).

 

Despite these numbers, and even though the internet of things is generally a subject of widespread attention, many companies have still not launched an IoT project. A report from the MIT Sloan Management Review published just nine months ago revealed that the majority of companies responding to their international survey (3 in 5) did not currently have an IoT project in place.

 

However, as Stephanie Jernigan and Sam Ransbotham note in the journal, the flipside is that 2 out of every 5 organizations are moving forward with IoT. The important thing, then, is to figure out what can be learned from the early adopters.

 

How do you move forward with successful IoT?

 

Here are the three best practices that seem to differentiate the most strongly successful adopters of the internet of things from the ones who didn’t fare as well, according to the researchers:

 

#1 best practice – Think big, but act small.

 

When businesses succeed with their first attempts at the IoT, they don’t get too grandiose with its scale. They select a direction that does not stretch the budget and does not employ an excessive number of devices. A key project mentioned by the researchers is the Array of Things (AoT), a network of sensor-containing boxes currently being installed throughout Chicago to gather and analyze real-time data on the city’s infrastructure, environment, and activity for public and research applications. AoT “will essentially serve as a ‘fitness tracker’ for the city,” notes the project’s FAQ page, “measuring factors that impact livability in Chicago such as climate, air quality and noise.”

 

Reliability is essential because maintenance is a particular challenge of IoT projects such as this. The MIT research team notes that the AoT has been moving slowly with the launch specifically because they need to make sure they know exactly what the reliability of nodes is. According to the University of Chicago, the first 50 of a total 500 nodes were installed in August and September 2016. The project continues to work in stages through its completion, with all nodes set to be in place by December 2018.

 

There is another side to size with IoT, too. You have to take care of not just the devices but also the interpersonal connections affected by the project. Companies studied by the MIT researchers typically focused on a single, smaller group of people (rather than all the company’s points of connection), making the project easier to manage from a relationship perspective.

 

A benefit of starting small and more niche is that you are less likely to create a headache for yourself in terms of integration moving forward.

 

#2 best practice – Embrace both short-term and long-term vision.

 

Jernigan and Ransbotham advised first coming up with use cases that might be worthwhile for your firm and then calculating the ROI of each of them. To a great extent, you should be able to attach concrete numbers to the project. Executives who replied to the MIT poll said that they had been able to come up with specific numbers showing the advantage of IoT via:

 

  • Rise in earnings (23%)
  • Rise in supply chain delivery or accuracy (20%)
  • Drop in fraud or other crime (16%)
  • Rise in harvest or manufacturing yields (15%)

 

The respondents said that these were each reliable ways to gauge effectiveness.
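To make the “calculate the ROI of each use case” step concrete, here is a minimal back-of-the-envelope sketch in Python. Every use-case name and figure in it is an invented placeholder rather than data from the MIT survey; substitute your own estimates before drawing any conclusions.

# Hypothetical back-of-the-envelope ROI comparison for candidate IoT
# use cases. All names and figures are invented placeholders.

use_cases = {
    "predictive maintenance": {"annual_benefit": 250_000, "annual_cost": 140_000},
    "fleet tracking": {"annual_benefit": 90_000, "annual_cost": 60_000},
    "smart metering": {"annual_benefit": 40_000, "annual_cost": 55_000},
}

def roi(benefit, cost):
    """Simple ROI: net gain divided by cost."""
    return (benefit - cost) / cost

ranked = sorted(
    use_cases.items(),
    key=lambda item: roi(item[1]["annual_benefit"], item[1]["annual_cost"]),
    reverse=True,
)

for name, figures in ranked:
    r = roi(figures["annual_benefit"], figures["annual_cost"])
    print(f"{name:<25} ROI: {r:+.0%}")

Even a rough ranking like this helps you pick the small, contained first project that the researchers recommend.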

 

However, it is not enough to simply think in terms of what’s happening right now. When you move forward with the internet of things, it’s important to think about how the insight from the current project can feed into something more expansive. The MIT scholars note that some enterprises started out collaborating on the Array of Things before jumping into other ventures.

 

Once you have your own internal project going, you will quickly think of other applications, says Silicon Labs IoT products senior VP Daniel Cooley – how you can put the data from the devices to the best possible use. “[S]omeone puts this wireless technology in place for a reason[,] and then they find different things to do with that data,” he says. “They very quickly become data stars.”

 

#3 best practice – Keep looking at different options.

 

It is key that you are able to see an obvious ROI from your internet of things project, that the data is needed, and that you are gathering it by the best possible means.

 

Nearly two-thirds of those surveyed by MIT (64%) said that they could not get the results that they have achieved with the IoT in any other way. The reason that the Array of Things took form is that the Urban Center for Computation and Data wanted to be able to answer questions about city concerns through data. Realizing that they did not have all the information they needed, they had to think about their options.

 

For instance, the UrbanCCD wanted to analyze asthma rates to see how they related to traffic and congestion levels in certain neighborhoods. Leadership at the organization started to think that sensors, connected to the web and distributed throughout the streets of Chicago, would be the ideal way to get reliable information directly from the source. Jernigan and Ransbotham noted that the scientists at the center did not immediately gravitate toward the IoT. Instead, they had a problem, and setting up IoT sensors was the most reasonable fix.

 

The MIT team highlights a number of other key findings about the internet of things:

 

  • Companies with advanced analytics skills are more than three times as likely to derive value from the internet of things as firms with less developed skills in that arena.
  • The IoT ties together not just devices but companies as well. This fact “necessitat[es] managerial attention to the resulting relationships,” say Jernigan and Ransbotham, “not just technical attention to the devices themselves.”
  • The IoT ties firms to government agencies and other industry players in addition to their customers and vendors.
  • Generally, a large economy of scale is a good thing. That’s not the case with the internet of things, though: expenses can grow faster than the network of devices.
  • The internet of things rests on already complex foundations, including its technical infrastructure and analytics, and it amplifies those complexities.
  • The upside of that complexity is that those who thrive on contemplating different processes and systems are rewarded.

 

*****

 

Do you want to experiment with the internet of things? Note the emphasis on technical infrastructure as a foundation for an enterprise-grade internet of things project. At Total Server Solutions, our High Performance Cloud Platform uses the fastest hardware, coupled with a far-reaching network. Build your IoT project.