

 

Data loss is a major problem, as an analysis of the raw information behind scientific studies makes clear. Businesses are at risk of data loss too, of course. Let’s look at how costly losing your data can be, and specifically at the role of storing copies in multiple, geographically separate locations.

  • Science suffering huge data loss
  • Typical SMB scenario
  • Most enterprises lose data every year
  • Impact of location on business data loss
  • Store data in 3 or more locations
  • Cloud hosting for data loss prevention

 

Science suffering huge data loss

A disturbing article was published in the academic journal Nature in 2013. Elizabeth Gibney and Richard Van Noorden revealed that it’s possible as much as 80% of scientific data could be lost by 2033. Why, in the era of cloud computing and disaster-recovery-as-a-service, should this be happening?

Professors and other researchers admitted that they have research data in many different odd places, such as attics, garages, and even on obsolete floppy disks. Because physical information is often hidden away like that in inaccessible locations, science is losing information at a fast clip.

A review in Current Biology wanted to track down the raw information for 516 ecology studies published from 1991 to 2011. The scientists directly contacted the study authors; as indicated above, the findings were disturbing. “[W]hereas data for almost all studies published just two years ago were still accessible, the chance of them being so fell by 17% per year,” explain Gibney and Van Noorden. “Availability dropped to as little as 20% for research from the early 1990s.”

The solution to this data loss problem is conceptually simple: keep additional copies in geographically diverse locations – in other words, use cloud hosting backed by a content delivery network (CDN).

Typical SMB scenario

Are you under the impression that merely backing up your company’s information is a solution in and of itself? As indicated by the loss of scientific data, it’s critical that your data is backed up in multiple locations. In this way, geographic diversification is a critical concept for all organizations, in terms of disaster recovery and business continuity.

Let’s look at this from the SMB perspective. You are a small business, and you haven’t yet set up a cloud backup system for your company. Your business goes underwater in a flash flood. You have insurance, allowing you to rebuild. The problem is that your data backup is only 20 minutes away, and that facility is flooded too. You don’t have insurance on that information – and it could be your most valuable asset.

Let’s look at how costly losing your data can be and the role of geographic diversity in lowering risk to your business.

Most enterprises lose data every year

There are two basic outcomes if you suffer extreme data loss: either your IT team or outside specialists recover the data, or it is gone for good. Whatever the outcome, the overall numbers show that data loss and downtime together cost businesses a massive amount of money each year.

A 2014 study, cited by Eduard Kovacs in SecurityWeek, collected responses from 3,300 IT leaders in two dozen nations. The analysis revealed that enterprises (i.e., companies with at least 250 on staff) lost an incredible $1.7 trillion over the course of the previous year to data loss and downtime. While there were fewer incidents of data loss compared to 2012, the sheer amount of data destroyed grew by 400% over the same period.

Furthermore, most enterprises lose data annually, according to the study. Nearly two in three enterprises (64%) had experienced downtime or data loss within the preceding twelve months. Downtime averaged 25 hours. More than a third reported a financial hit (36%) or setbacks to product development (34%).

Impact of location on business data loss

Understanding the generally high cost of data loss, let’s move back to the environmental discussion. The fact is that flash floods, tornadoes, fires, and other natural disasters derail businesses frequently. Here are the top two reasons a company might lose its information, according to a 2015 survey highlighted by Timothy King in Solutions Review:

  1. Hardware or datacenter failure – 47%
  2. Environmental disasters – 34.5%

Knowing that data is often lost to natural events, the best advice to outfits that are establishing disaster recovery plans is to make sure their data is geographically distributed, says King. “Given that, it would seem obvious that organizations would move in that direction…, in order to apply further safeguarding to their data,” he adds. “Unfortunately, this isn’t the case.”

Where is data backup typically located? For many organizations, it is in close proximity to their business. These portions of different business categories had their backup within 25 miles of their central location:

  • Government – 46%
  • Academic – 27.5%
  • Non-profit – 23%
  • Private-sector business – 16%

When a company keeps its backup within 25 miles, there is a good chance (although not a certainty, of course) that a single natural disaster would affect both locations at once. In that case, the backup location clearly could not serve its function.

That’s a big problem, says King. “The point to having a secondary server or data storage site is to avoid catastrophes that occur at the main site,” he says. “By having them so close together, the backup becomes almost worthless.”

Store data in 3 or more locations

Your data recovery plan should incorporate onsite backup, offsite backup, and online backup in at least three geographically diverse places, argues Zaid Ammari in Tech.Co. That way your chances of downtime and disruption in customer service are significantly reduced.

While many businesses have multiple redundancies for data, again, the issue is that their backup systems are located nearby – creating high risk. The chance of losing a local backup is 0.25%. That may sound very low, but it’s “still too risky for critically important proprietary information,” says Ammari. Store data in three locations, on the other hand, and “the probability of data backup survival rises to 99.99 percent, virtually ensuring that the company’s data will remain completely intact regardless of the situation.”
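To see why extra copies help so dramatically, treat each site as failing independently; a minimal sketch of that arithmetic follows, using Ammari’s 0.25% figure for a single backup (everything else here is illustrative). Note that independence is an idealization – the quoted 99.99 percent is lower than the naive product precisely because backups located near one another tend to fail together, which is the whole argument for geographic spread.

```python
# Probability that ALL copies are lost, assuming each site fails
# independently in a given disaster (an optimistic assumption:
# nearby sites tend to flood or burn together).
p_single = 0.0025  # 0.25% chance of losing any one backup

for copies in (1, 2, 3):
    p_all_lost = p_single ** copies
    print(f"{copies} site(s): survival probability = {1 - p_all_lost:.6%}")
```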

Here are four points that security thought leaders often make about disaster recovery and data protection:

  1. Figure out your threats. How might your information be at risk? Know possible problems such as a breach, accidental deletion, file corruption, or a natural event.
  2. Audit what you have. Knowing how to preserve your data in part depends on knowing where it is and who can get to it.
  3. Decide when redundancies are needed. Obviously back up anything mission-critical. Anything that is proprietary or contains sensitive information is certainly high-priority too.
  4. Determine your locations for backup. You want to have a minimum of three diversified off-site locations for your data, notes Ammari. “Files are less likely to be compromised if there are multiple copies stored on various media,” he says. “If a disaster strikes, duplicate copies in separate and distinct locations can help prevent a permanent data loss.”

Cloud hosting for data loss prevention

Do you want to better protect your business from data loss through geographic diversity? Distribute your data geographically through cloud hosting. At Total Server Solutions, our cloud uses the fastest hardware, coupled with a far-reaching network. Learn more.


One of the most touted traits of cloud hosting is that it offers a distributed network of servers, leaving no single point of failure (SPOF). What are SPOFs, what do they look like in a datacenter, and how can you avoid them on your team and in your technology? Through this discussion, we can better understand how to avoid SPOFs among personnel and why businesses value the anti-SPOF distribution of cloud technology.

 

  • SPOF – what is it?
  • SPOFs in the wild
  • Checklist to SPOF-proof your team
  • Step #1. Figure out who your SPOF people are.
  • Step #2. Think about how to rectify your SPOFs.
  • Step #3. Create redundancies to mitigate the SPOFs.
  • Step #4. Allow your development plan to serve as guidance.
  • Cloud hosting and single points of failure

 


 

SPOF – what is it?

You might have heard the term single point of failure (SPOF) in passing without knowing its exact meaning. Since it has to do with failure, it’s clearly a key topic in networking and something every datacenter manager wants to avoid as a top operating priority. Specifically, a SPOF is a vulnerability – arising from the way a system or circuit is designed, set up, or deployed – that allows a single fault to crash the whole system.

 

SPOFs in the wild

If a SPOF exists in a datacenter, it means that the data or certain services can become unavailable just because of a seemingly isolated malfunction. In fact, explains Stephen J. Bigelow in TechTarget, the datacenter can completely go down, if the interdependencies and location are mission-critical enough. “Consider a data center where a single server runs a single application,” he says. “The underlying server hardware would present a single point of failure for the application’s availability.”

 

Think about it: it’s just like a PC that isn’t backed up in any way. If the computer dies or gets hacked, that SPOF means you’ve lost all your files. In a similar way, if that solo server goes down, the app either becomes unreliable or goes down with it. People become unable to get into the program. Data could be lost as well, which is both highly frustrating and highly expensive. A basic idea on the datacenter floor is to cluster servers so that more than one copy of the program is running; at least one additional server is used in this scenario.

 

If the original machine goes down, the additional one jumps in so that users are able to keep using the app. That simple anti-SPOF technique (and you can get much more complex, of course) essentially means that you can hide a failure behind the scenes, allowing users to seamlessly transition to the new server, unaware of any issues (as occurs standardly in cloud hosting environments, invisible to the end-user).
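To make the failover idea concrete, here is a minimal client-side sketch in Python. The endpoints are hypothetical, and real clusters typically hide this logic behind a load balancer or virtual IP rather than in the client; the point is simply that a request survives the loss of the primary node.

```python
import requests

# Hypothetical endpoints; the replica only matters when the primary fails.
SERVERS = [
    "https://app-primary.example.com",
    "https://app-replica.example.com",
]

def fetch(path: str) -> requests.Response:
    """Try each server in order, hiding a primary failure from the caller."""
    for base in SERVERS:
        try:
            response = requests.get(base + path, timeout=2)
            if response.ok:
                return response
        except requests.RequestException:
            continue  # node unreachable: fall through to the next one
    raise RuntimeError("all servers unavailable - a true outage, not one SPOF")
```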

 

Looking at single-point-of-failure from a different angle shows us how broad this challenge is. Bigelow gives the example of one network switch that supplies networking for an array of servers. That is a SPOF. “If the switch failed (or simply disconnected from its power source), all of the servers connected to that switch would become inaccessible from the remainder of the network,” says Bigelow. “For a large switch, this could render dozens of servers and their workloads inaccessible.” By building in multiple redundancies, in the form of additional network connections and switches, you allow your machines access to a different pathway if a malfunction takes place. That, again, is a basic anti-SPOF method.

 

A datacenter engineer is tasked with locating and fixing any SPOF within the system, at any level. Now, keep in mind that the head of infrastructure cannot properly create the flexibility and redundancy needed without a reasonable budget. Obviously, in the situations described above, you have to pay for the extra physical servers, switches, cables, and network connections. The architect of a datacenter, or of any system, should weigh how mission-critical a workload is against the price of ridding the system of every possible SPOF. In most cases, not every system is mission-critical. There are situations in which an architect might reasonably decide to disregard a SPOF intentionally and save the money.

 

The other option is to go with cloud hosting to get rid of single points of failure through a broad distribution of servers. Before we get into cloud, though, let’s look at SPOF-proofing your staff.

 

Checklist to SPOF-proof your team

You know to remove single points of failure from your systems, but you may not think to do it with your people as well. That’s important too, advises Tomas Kucera of The Geeky Leader. “One of the most often overlooked tasks of any leader is to plan his succession and to ensure he has a plan how to ensure his team works even if he loses a key contributor,” he says. “We are all so submerged in the daily tasks that we often don’t realize that we fail to make the team resilient and disaster-proof.”

 

Here are a few tactics you can use to remove single points of failure within your staff:

 

Step #1. Figure out who your SPOF people are.

Which people within your company are mission-critical? Now, it may seem obvious to point to C-level executives or other leaders. Keep in mind that directors are sometimes easier to replace than others. Really review your people with a few tough questions:

 

  • Does the individual “hold a unique knowledge?” Kucera says to ask yourself. This insight could mean “institutional knowledge, technical or just knowing lots of people that are key to your team survival and no one else knows them or has that knowledge,” he adds.
  • Do they have capabilities that are difficult to replace? That could be a top salesperson, someone who’s a big source of mentorship, or a business negotiator who keeps down your costs and gets you what you need to excel.
  • Is the individual fulfilling a specialized role that is essential to the seamless viability of your team? This person could serve in some ways as a leader even though their official role might not be executive. It can also be someone who’s pleasant or funny and helps with morale.

 

Step #2. Think about how to rectify your SPOFs.

Just like with a single point of failure within an IT system, you must have a mitigation process to remove single points of failure from your staff. However, when it comes to people, your solutions won’t be cookie-cutter.

 

Consider how the flow and/or culture of your workplace might change if each SPOF person were to quit or otherwise stop showing up to work. What types of insights, capabilities, or roles would need to be filled by another party? How would the business be harmed, on any level (internally and externally)? Think about today and about next year.

 

As you consider these VIP people, look also at your entire workforce. Are there colleagues who share some of the rare qualities of the SPOF? If not, you need redundancies; it might be a good idea to hire.

 

Step #3. Create redundancies to mitigate the SPOFs.

Did you find a colleague who might be a reasonable backup person? You need to make sure that second person is trained as a potential replacement. Create and closely monitor a development plan to share knowledge as an anti-SPOF maneuver.

 

Step #4. Allow your development plan to serve as guidance.

You want this development plan to be central to your overall team’s development. Are you going to give someone a new set of work? Are you considering your organizational structure? The SPOF development plan should be reviewed. Any time you make adjustments, try to eliminate SPOF instances. Generally, make sure you aren’t assigning everything to the same top individuals. When you rely heavily on a few people, they become single points of failure. It makes the organization less flexible and more vulnerable.

 

Moving forward, update the list every few months.

 

Cloud hosting and single points of failure

A strong cloud hosting infrastructure is decidedly built to be anti-SPOF. Single points of failure no longer need to be part of your company’s technological foundation. At Total Server Solutions, our cloud uses the fastest hardware, coupled with a far-reaching network, making everything easier and SPOF-free. We do it right.


 

<<< Go to Part 1

 

  • Tips & issues when adopting SSL (cont.)
  • Do the benefits of site-wide SSL outweigh the issues?
  • Extended validation SSL: what is it?
  • Other takeaways from site-wide SSL experiment
  • Netcraft SSL Survey – brand popularity
  • Market share of SSL certificates
  • Validation categories as percentages of the market
  • Securing your site with SSL

 

Tips & issues when adopting SSL (cont.)

Speed: The encryption, and the key retrieval a private connection requires, will slow your site down a bit. You can implement SPDY (an open-source protocol developed mostly by Google) to adjust the processing of HTTP traffic for a little acceleration; still, weigh the latency against the obvious advantages of site-wide SSL.

 

Do the benefits of site-wide SSL outweigh the issues?

Clearly site-wide SSL is not entirely positive. However, here are a few reasons it makes sense regardless of the challenges it presents, from Web developer Andrea Whitmer – and these are simply effects she noticed from a case study of her own site:

 

  1. Her bounce rate went down, she assumes because people immediately trusted her site more. Now, let’s not gloss over this detail. A reduction in bounce rate is important – it’s a factor typically listed in top-5 and top-10 lists of key metrics for online success. (In fact, Tony Haile of Chartbeat says that 55% of visitors will spend 15 seconds or less on your site.)
  2. There were fewer questions from people related to payment. In other words, people moved more seamlessly through the sales funnel.
  3. The process was helpful simply in terms of testing.

 

It is also worth noting – actually it’s very important – that the type of SSL certificate Whitmer was using was an extended validation (EV) cert. Let’s address what an EV certificate is briefly.

 

Extended validation SSL: what is it?

OK, so a secure sockets layer certificate will encrypt transmission on pages of your site where it is implemented, but it also does something else: it validates the website owner, for better credibility. That’s why an extended validation certificate is often sought by site owners. It isn’t valuable for a higher degree of encryption but for a higher degree of validity and, in turn, trust.

 

This is visual and obvious. If you have ever been to any site that has EV active, such as PayPal, you will see the address bar turn green and the name of the verified company appear in your browser. These elements are additional to the lock symbol and https protocol. There are numerous case studies by Symantec and others, but the positive impact should be obvious just considering buyer psychology and the importance of online trust. Here’s an example: Overstock.com saw an 8.6% reduction in shopping cart abandonment in a Symantec case study.

 

However, there is another aspect that is helpful as well, according to the nonprofit Certification Authority Browser Forum (CA/B Forum) – the industry group that defines extended validation parameters. “The secondary objectives… [of certificates] are to help establish the legitimacy of an entity claiming to operate a Web site,” says the organization, “and to provide a vehicle that can be used to assist in addressing problems related to phishing, malware, and other forms of online identity fraud.”

 

Specifically related to phishing, consider this: if a phishing site accurately mimics yours to steal your or your customers’ information, the green address bar and business name supplied by an EV SSL may be the only way for someone to tell which site is really yours. That means you could prevent phishing attacks, one of the major forms of online fraud, by instructing users (perhaps through a notice on the site) to proceed only if they see the EV indicators populate.

 

Other takeaways from site-wide SSL experiment

Whitmer notes that she was at first skeptical about whether site-wide SSL would help in the search engines (even though, according to Google itself, HTTPS does improve your rankings), because the immediate improvements she expected didn’t materialize. Nine months after she transitioned, she had much better search traffic than before; but she points out that many other changes made to the site in the meantime could also have boosted her rankings.

 

All in all regarding search rankings, she said that it could be a good tactic if you set it up in the right way – although this aspect obviously isn’t a benefit that she can strongly argue.

 

In closing, Whitmer does advocate site-wide SSL for anyone with a site that is similar to hers. “For me, sitewide SSL has been worth the effort because of my future plans for my business,” she says, “as well as the current pages on my site using forms to collect information from visitors.”

 

Netcraft SSL Survey – brand popularity

As touched on in the first part of this piece, Netcraft conducts a monthly SSL Survey, assessing the number of SSL certificates that exist on public-facing websites. Again, the numbers from its survey account for the total number of certificates – not taking into account that the same cert is sometimes used on multiple sites (which creates browser errors anyway and is not considered valid use).

 

Market share of SSL certificates

As of January 2015, nearly one-third of SSL certificates were Symantec brands (Symantec, GeoTrust, Thawte, or RapidSSL). GoDaddy was in the second position, and Comodo in third. Those three SSL providers supplied the vast majority of certificates – accounting for greater than 75% of the market. Other brands followed in this order: GlobalSign, DigiCert, StartCom, Entrust, and Network Solutions.

 

Note that all of the certificates we sell at Total Server Solutions are from the industry’s most trusted brand, Symantec.

 

Validation categories as percentages of the market

There are three types of assurance standardly recognized within the industry – and, as such, supported by all the major browsers under the defined parameters for each validation type (via their agreements within the CA/B Forum, mentioned above).

 

“Domain-validated certificates simply validate control over a domain name,” notes Netcraft. “Organization-validated certificates include the identity of the organization; and Extended Validation certificates increase the level of identity checking done to meet a recognized industry standard.” The shorthand designations for these SSL certs are DV, OV, and EV, respectively.

 

The domain-validated cert is the least expensive. Since businesses probably vastly undervalue the role of an SSL certificate in adding credibility and trust to their site, this cheapest variety is by far the best seller, with nearly 70% of all sales. Meanwhile, extended validation, the most expensive but least appreciated cert, represents under 5%. The rest are OV.

 

Now, just consider this argument that the EV SSL is the way to go even though it is currently the least popular version: as mentioned above, Symantec commissioned a case study of Overstock.com through an independent third-party research group and found an 8.6% decrease in abandoned shopping carts in EV-enabled browsers.

 

Consider that Overstock is already a highly recognized brand (so the credibility boost is presumably smaller than most sites would see) and that this was essentially a split test. An EV cert from GeoTrust, a top Symantec brand, costs less than $300 more per year. Simply put, if you do the math, this investment often makes sense.

 

Securing your site with SSL

Are you interested in what site-wide SSL might do for your conversion rate or bounce rate? Or do you just need a cert to encrypt your logins or ecommerce? Keep your transactions and communications secure with our SSL certificates at Total Server Solutions.


Have you considered using an SSL security certificate on your website? This technology has been growing exponentially since it was first introduced in the mid-1990s. Let’s look at whether SSL might be right for your site, the issue of partial vs. complete implementation, and some thoughts on common adoption issues.

  • What is an SSL security certificate?
  • Study: Certificate growth rapid across the Web
  • Should you have SSL on your site?
  • Is site-wide SSL right for you?
  • Tips & issues when adopting SSL

 

What is an SSL security certificate?

The SSL/TLS protocol (which stands for secure sockets layer / transport layer security) is a simple way to secure the exchange of information online. Accepted and promoted by all the major browser and operating system companies, certificates that follow its standards are responsible for the HTTPS protocol, the lock icon, and, in some cases (with specific types of additional validation), green indicators and/or company validation. Through these means, which populate automatically once the technology is installed, you are able to establish a private connection with whoever uses your system.

 

In order to get an SSL working with your site, you are essentially coupling whatever domain or subdomain you designate with a cryptographic key. No matter what level of certificate validation you purchase (domain, organization, or extended), the verification and connection of your site is performed by a certificate authority (CA). The CA signs the certificate so that anyone visiting your site can (if they choose) check the firm behind your security mechanisms.
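As a concrete illustration of that coupling, here is a hedged sketch using Python’s third-party cryptography package: it generates a private key, then builds the certificate signing request (CSR) you would hand to a CA. The domain name is a placeholder, and a real deployment would also persist the key securely.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the private half of the key pair the certificate will bind to.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR coupling a (placeholder) domain with that key; this is what
# you submit to the certificate authority for signing.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

print(csr.public_bytes(serialization.Encoding.PEM).decode())
```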

 

Study: Certificate growth rapid across the Web

Brand-name SSL certificates are the majority of the ones found online. Although generic certs can be used to encrypt, they generate browser error messages since they are unofficial. The good news is that while there is a range of SSL certificate prices, there are certainly options to fit every website’s budget.

 

Probably the primary ongoing analysis of SSL adoption is the Netcraft SSL Server Survey. It “has been running since 1996 and has tracked the evolution of this marketplace from its inception,” notes Netcraft. “[T]here are now more than one thousand times more certificates on the web… than in 1996.”

 

Looking at the simple number of certificates is the easiest and clearest way to gauge adoption, although it should be understood that sometimes the same certificate is used on multiple sites. Again, as with non-brand certs, you will get browser warnings – not because your SSL chain is broken, but because visitors cannot validate site ownership – causing obvious trust issues.

 

Should you have SSL on your site?

Is it a good idea to get an SSL certificate for your site, for a portion of it or the entire thing? Let’s look at that issue.

 

The first thing in favor of using SSL is that Google and other search engines now give you a better ranking if you implement the technology. That’s one factor in favor of site-wide use.

 

Do you do ecommerce on your site? Then you want one, notes Web developer Andrea Whitmer. “If you’re taking credit card payments directly on your website, you definitely need SSL in place to encrypt your customers’ credit card information,” she says. “However, that doesn’t necessarily mean you need it on your entire site.”

 

OK, so why would you want it on just a part of your site? SSL encryption does, as you can imagine, slow down your speed a bit, which can be a hit to your user experience (obviously countered with the positive UX of the security you’re providing). You might want it in your shopping cart, for example. However, you don’t need one for PayPal purchasing (since PayPal itself takes care of the SSL).

 

Not everyone has their own ecommerce app, but you probably have a way for people to create user accounts. Assuming that’s the case, you do want encryption for those logins so that the accounts actually are private and safe. “After all, your members are giving you their email addresses, names, and passwords, all of which they likely use on other sites,” says Whitmer. “Do you really want to risk being responsible for a security breach that results in your members’ information being spread across the whole internet?”

 

Even if people don’t have accounts, you similarly will want an SSL certificate if people are sending or uploading personal data through a form. If you have forms in addition to logins and shopping, you want a cert on each of those site areas. If they are within subdomains, you can get a cert type called a wildcard that covers unlimited subdomains. Otherwise, a standard certificate covers just one domain or one subdomain.
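To make the wildcard rule concrete, here is a simplified sketch of the matching logic (real validation follows RFC 6125 and handles more edge cases): a wildcard covers exactly one subdomain label.

```python
def cert_matches(cert_name: str, hostname: str) -> bool:
    """Simplified hostname check: a wildcard covers exactly one label."""
    if cert_name.startswith("*."):
        base = cert_name[2:]
        if not hostname.endswith("." + base):
            return False
        # The part the * stands for must be a single label (no dots).
        return "." not in hostname[: -len(base) - 1]
    return cert_name == hostname

print(cert_matches("*.example.com", "shop.example.com"))  # True
print(cert_matches("*.example.com", "a.b.example.com"))   # False: two labels
print(cert_matches("example.com", "example.com"))         # True
```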

 

Generally speaking, businesses that are only posting content do not bother with an SSL certificate. That’s because there is no particularly sensitive information changing hands and no need to comfort someone with a reputable-brand cert while they proceed through a sales funnel.

 

Is site-wide SSL right for you?

 Here are three advantages to applying SSL technology on your entire site rather than just sections in which users are logged into the system or buying:

 

  1. User confidence. Everyone feels a little uneasy when they are getting ready to put payment information or even a physical address into the system of an unfamiliar company. It’s easy to reduce the fear of using your site or making a purchase by adopting SSL – so guests can be comforted (for good reason) by the lock icon.
  2. You may just want to figure out if your site traffic and engagement will or will not improve with SSL. You can implement for a rough idea, or split-test for greater clarity.
  3. The future-proofed site. If you think you will eventually want areas of your site that use this technology, it can be a good idea to set up a cert well in advance, rather than going into a launch unfamiliar with this component.

 

Tips & issues when adopting SSL

  1. Social numbers: Sometimes people will experience serious frustration when social shares, and the social proof that impresses potential buyers, are deleted in the process of transitioning to the new protocol. For those situations, if you are on WordPress, there is a plugin called Social Warfare that allows you to get back any share data that floats away.
  2. Social plugins: The plugins are often not secure, and when their setting is switched over to https, you can end up with a number of glitches. This requires troubleshooting.
  3. Internal links: currently your site links to http versions of its pages. Once the site becomes https, you will need everything to forward to the secured version. A 301 redirect is a quick fix (see the sketch after this list), but you may not want to rely so heavily on redirects – in which case, you simply need to add the s (following http) within the URLs.
  4. Additional plugin problems: Many plugins were not built to work correctly with https. Expect to potentially have to switch to a different plugin or at least to get a patch enabling you to use the SSL without error messages.
  5. Webmaster Tools: “[R]emove and re-add your site in Google’s Webmaster Tools (or at least do a change of address) and submit a new sitemap to force re-indexing of your site using https,” notes Whitmer. Be aware that when you submit the new map, you may see your traffic temporarily decline.
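For the redirect mentioned in point 3, here is a minimal sketch. It assumes a Python/Flask app purely for illustration; on WordPress, an .htaccess rule or a plugin plays the same role, and behind a proxy you would trust the forwarded-protocol header instead.

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS twin with a permanent (301) redirect.
    if not request.is_secure:
        secure_url = request.url.replace("http://", "https://", 1)
        return redirect(secure_url, code=301)

@app.route("/")
def index():
    return "served over HTTPS"
```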

 

Should you get an SSL cert?

Maybe you know exactly what you need in an SSL certificate. If not, we can advise you further. At Total Server Solutions, our expert team is made up of individuals with the highest levels of integrity and professionalism, allowing us to guide you in the direction of a comprehensively optimized website. See our SSL Certificate Options.


Cloud is fast becoming the go-to solution for business computing systems, replacing the traditional legacy model. How can you make the most of a cloud transition?

  • Cloud becoming the dominant business technology
  • Tip #1 – Consider your goals.
  • Tip #2 – Scrutinize your options.
  • Tip #3 – Look at your current investment.
  • Tip #4 – Dip a toe at a time.
  • Fast, reliable, scalable cloud

 

Cloud becoming the dominant business technology

Are you looking at how your company should be spending its computing budget? The extent to which organizations are committing to the cloud really is kind of stunning. In fact, more than nine out of ten businesses (93 percent) have implemented some type of cloud solution, according to the annual RightScale State of the Cloud report.

 

As would be expected as the industry becomes more developed and mature, companies are introducing more complexity into their cloud services. That’s because many companies are choosing to blend different options. More than four in five firms (82%, a rise from 74% in 2014) say that their cloud is a hybrid – an integration of a private cloud (hosted on-site or through a third party) with a remote public cloud in an independent data center.

 

Of course there has been a huge amount of hype surrounding this technology, but it’s also central to a very real computing revolution – a move to the third platform (cloud, mobile, social, and big data). Generally speaking, information technology has experienced “a shift toward purchasing virtualized, digital services that replace physical equipment,” reports the Wall Street Journal.

 

As cloud becomes more prevalent, the conversation about its general benefits becomes a discussion of how to migrate successfully.

 

Tip #1 – Consider your goals.

It’s important to know the position of your business and how you are intending to become better via cloud adoption. Becoming more agile and flexible (so you can adapt quickly to changing marketplace conditions) is the biggest advantage, according to the Open Group. Here are five other primary benefits:

 

  1. Cuts your costs
  2. Consolidates your systems and makes them easier to manage
  3. Gives you access at any location where you have Internet
  4. Allows you to work more easily and immediately with others (internal and external)
  5. Is the sustainable choice, designed for optimal infrastructural efficiency with lower power use

 

Tip #2 – Scrutinize your options.

You want to gauge different providers from every possible angle, of course. Look at these parameters:

 

Security

 

“What you want to know is how the cloud provider manages data security, its history of regulatory compliance, and its data privacy policies,” says Business.com. “If a cloud service has clients that deal with confidential and sensitive information you can have some degree of confidence they’ll handle your data in a similarly secure fashion.”

 

In other words, you want to look for PCI compliance and an SSAE-16 Type II audit, signs that the cloud abides by strict IT standards. Also check for testimonials or other reviews.

 

Affordability

 

It can be a little tricky to figure out exactly what a cloud service is going to cost. Make sure that your service-level agreement (SLA) is clear and properly protects you. Ask whatever questions you may have so that you aren’t caught off-guard.

 

Public, private & hybrid

 

Private clouds are sometimes preferred by organizations for compliance or to have the utmost possible control of the system. A private cloud also allows businesses to customize parameters as needed. Its primary issue is expense: you aren’t leveraging the same economies of scale as with the public version (which spreads infrastructure across multiple clients). Public cloud is also easier to scale – helpful not just for business growth but for seasonal businesses and common peaks such as Black Friday.

 

You are essentially able to use an operating-expense rather than a capital-expense model. You pay for what you need – the actual amount of data you need to process. The cloud service provider (CSP) keeps the system properly up-to-date and safe, which in turn means you can focus on your core business.

 

Tip #3 – Look at your current investment.

The cloud is probably most attractive to startups simply because there’s so little upfront expense. Some companies already have their own data centers, though.

 

Brian Posey notes in TechTarget that companies often leave behind their legacy architecture, in part because it is always on the road toward decay. “Outsourcing a server’s data and/or functionality to the cloud may mean abandoning your on-premises investment unless an on-premises server can be repositioned,” he says. “No matter how good it is, any server hardware eventually becomes obsolete.”

 

Large companies understand that their infrastructure will eventually no longer be usable, of course. The standard way to build equipment’s aging process into the business plan is through a hardware lifecycle policy. A very straightforward one, for instance, would be to get rid of all servers once they have been deployed for five years.

 

Keep in mind that cloud is not an either/or proposition. Many organizations choose to interweave their lifecycle policy with their adoption of cloud. This simple step makes it possible for IT teams to switch from on-site servers to cloud rather than buying updated equipment.

 

That’s also evident in the hybrid cloud scenario, which is fundamentally an integration of private and public cloud components (with the private cloud either on-premises or hosted in a third-party data center). Some companies choose to keep certain systems in their own facility because the process of redesigning and testing them for cloud doesn’t make sound business sense immediately. While older applications typically involve more debate, new apps are more often built for cloud without hesitation.

 

While traditional computing is still used for portions of many companies’ infrastructures, you do want to explore whether it makes sense to keep any legacy systems in place at all. Patrick Gray of TechRepublic thinks that cloud is quickly becoming the successor to the dedicated approach to computing. “Cloud computing has completely revolutionized several sectors that were once dominated by large and expensive legacy applications,” he says. “The CRM (Customer Relationship Management) space is a major example, where companies can now provision an enterprise solution with a credit card.”

 

Just that one example means that firms don’t have to handle a major capital expense. Plus, you don’t have to get specialists to assess the type of equipment you need and engineers to set it up in your data center. You can see the sea change that can occur when you decide that you will no longer be focusing internally on maintaining your own raw infrastructural resources for computing.

 

Tip #4 – Dip a toe at a time.

One thing you want to remember about cloud, as indicated above, is that it doesn’t require you to toss your current hardware. In fact, one reason people are so attracted to the technology is because you can access whatever amount of computing power you need, changing it as you go.

 

Gartner analyst Elizabeth Dunlea says that the best way to approach cloud is to think of it in terms of the needs you are meeting as opposed to a collection of technological components. “By tackling one service at a time, it’s easier to measure what worked and what didn’t,” she says. “This is where best practices are drawn for future deployments.”

 

Fast, reliable, scalable cloud

Are you in need of cloud hosting that meets your expectations for this revolutionary, highly touted technology? At Total Server Solutions, our ultra-fast hardware and far-reaching network make everything easier and more transparent. We do it right.


The statistics on WordPress security are in some ways a little grim. That’s the case in large part because many sites aren’t spending the time or energy to take the necessary precautions. How can you safeguard your site?

 

  • Hardening your site & WordPress risk-taking stats
  • 8 basic steps to harden WordPress
  • Expert WordPress hosting

 

Hardening your site & WordPress risk-taking stats

When IT folks talk about security, they often use the word hardening. That’s an apt term: hardening makes your defenses more rigid and your perimeter more difficult to compromise. You want your walls and gates multiply reinforced, and you want sentries posted so that no one unverified enters the private areas of your site.

 

You may not consider yourself well-versed on WordPress security or how to generally safeguard a website, but you have probably picked up a few ideas from the many pieces out there on the topic. If you haven’t done much to address security yet, it’s certainly compelling to look at some sobering statistics:

 

  1. An incredible 31% of targeted cybercrime efforts in 2012 were aimed at small businesses, notes the National Cyber Security Alliance. To put it another way, about 20% of small businesses get hacked annually. Three out of every five outfits that are compromised go bankrupt within just half a year.

 

  2. WordPress security should by no means be assumed, says Brenda Barron of WPMU Dev. “Did you know 73% of the popular sites that use WordPress were considered ‘vulnerable’ in 2013?” she asks. “Or that of the top 10 most vulnerable plugins, five were commercial plugins available for purchase?”

 

8 basic steps to harden WordPress

As clearly seen in the above statistics, it’s a mistake to think that security isn’t paramount online or that WordPress is in any way fundamentally safe. Here are eight ways to secure your site:

 

Step #1 – Passwords

If you want your house to be secure, get strong keys made. If you want your site to be secure, you have to have really strong passwords. One of the most prominent random password generators – useful whenever you need a new password – is Perfect Passwords. Randomizing really is worthwhile: with no connection to you or even to the English language, a password is much harder to guess.

 

You also want to treat passwords with care. Don’t share them with anyone. Don’t use common words (and ideally randomize them). Plus, use a different password for each of your accounts.

 

Now, hardly anyone follows that last rule: three-quarters of web users have the same password for Facebook and their email, according to a study from BitDefender. That’s an unnecessary vulnerability, according to Eric Griffith of PC Magazine. Simply develop a system for remembering your passwords (such as acronyms based on stories, with numbers and symbols thrown in) or a storage system so you can use optimally diverse, randomized passwords.

 

If you don’t want to use a random password generator, here are four steps to create one that is similarly obtuse, from Griffith: “Spell a word backwards… Substitute numbers for certain letter… Randomly throw in some capital letters… Don’t forget the special character.”

 

Finally, be sure to use at least ten characters per password.
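If you would rather generate passwords programmatically, here is a minimal sketch using Python’s standard secrets module (designed for cryptographic randomness, unlike the plain random module):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    """Cryptographically random password; 16 chars clears the 10-char floor."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
```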


Step #2 – Updates

You want to always stay abreast of updates. Any time an update comes out, it’s tempting to dismiss it as a pointless effort to introduce features you might never use – after all, you don’t want to sit around updating all the time rather than actually using the system.

 

Keep in mind that security patches are introduced to plug holes through which hackers could enter. If you don’t take advantage of those updates, it’s almost as if you were inviting the hackers in. Updates are first-priority and always should be, and that applies to the overall WordPress, along with themes and plugins.

 

WordPress developer Jerod Morris notes that many people are fearful of updating their site because they don’t want to lose data or have technical problems. “[I]f you’re afraid of it, then you need to re-evaluate your theme and plugin strategy,” he says. “Your theme will certainly get disrupted when a hacker injects half a page of nasty encrypted code into it.”

 

Additionally, you want to be careful what plugins you include. Don’t think of them as part of WordPress because they are independent. Make sure they are updated often. Also consider paying for support.

 

Step #3 – Admin

Changing “admin” to a different name is a simple step, but realize hackers can find usernames elsewhere – such as from blog posts. Rather than focusing on the username, it’s more important to center yourself, again, on password strength.

 

You may also want to use something like a YubiKey – a small device that offers two-factor authentication at the touch of a button. Whether you use this solution or another, introducing a physical component will vastly improve your security.

 

Step #4 – Brute force

Hackers love getting into sites. In fact, many large sites are hit with hundreds or even thousands of failed login attempts per hour!

 

How do you defend yourself? First, make sure your web host prioritizes security and protects you against brute force from its end. Second, you can use a plugin such as Limit Login Attempts to defend yourself.
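The core idea behind such plugins (which are written in PHP for WordPress) is simple rate limiting. Here is a minimal in-memory sketch in Python – illustrative only, since a real implementation would persist state and account for proxies:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5          # failed logins allowed per window
WINDOW_SECONDS = 15 * 60  # 15-minute sliding window

_failures = defaultdict(list)  # ip -> timestamps of recent failures

def login_allowed(ip: str) -> bool:
    """Block an IP once it exceeds MAX_ATTEMPTS failures in the window."""
    now = time.time()
    _failures[ip] = [t for t in _failures[ip] if now - t < WINDOW_SECONDS]
    return len(_failures[ip]) < MAX_ATTEMPTS

def record_failure(ip: str) -> None:
    _failures[ip].append(time.time())
```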

 

Step #5 – Malware detection

You must have some kind of malware protection in terms of server-side scanning. Again, your web host should cover this – but make sure you know how your systems are protected.

 

Step #6 – Malware removal

Be aware it’s important that your malware solution should go beyond scanning to cleanup. “A couple of the oft-overlooked ‘true costs’ of WordPress ownership are those associated with downtime due to security issues and cleaning up those issues,” says Morris. In other words, you want to make sure your hosting partner is able to keep you up and running, or help you quickly recover if you do get attacked.

 

Step #7 – Choice of web host

Many people have always chosen to have their own dedicated machine; but don’t make the mistake of thinking cloud hosting exposes you to the same dangers as a traditional shared server. Keep in mind that the security industry is highly focused on the cloud industry – so you should be able to find secure settings in the public cloud. Many industry thought-leaders have commented that public cloud is more secure than the typical on-premises datacenter because systems are monitored and patched immediately, behind the scenes, by expert full-time security personnel and automated cloud mechanisms.

 

That said, your host should really have security, along with things like customer service and performance, as one of its top priorities. They should understand that the threat landscape is always evolving and that they need to approach it dynamically.

 

Step #8 – Dirty dishes

Think of old plugins and themes that are attached to your installation but unused as dishes with food growing older, more rotten, and more attractive to pests and rodents. Clean up the kitchen.

 

Cleaning up is also important because if you do get hacked, it’s easier for a pro to come in and remove the problems if they can see everything clearly.

 

In addition to getting rid of unnecessary components, you also want to organize your file structure. Look at the default WordPress core, and see how your list compares. If you have a few extra files, that’s fine; but you don’t want to have twice as many or more.
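One hedged way to run that comparison is to diff your install’s top-level listing against a stock manifest. The core names below are a partial, assumed listing – check them against the actual manifest for your WordPress version – and the install path is a placeholder:

```python
from pathlib import Path

# Partial, assumed top-level listing of a stock WordPress install;
# verify against the real file manifest for your version.
STOCK_CORE = {
    "wp-admin", "wp-includes", "wp-content", "index.php", "wp-load.php",
    "wp-login.php", "wp-settings.php", "wp-config.php", "xmlrpc.php",
}

site_root = Path("/var/www/html")  # placeholder: adjust to your install
extras = sorted(p.name for p in site_root.iterdir() if p.name not in STOCK_CORE)
print("Files not in the stock listing:", extras)
```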

 

Expert WordPress hosting

Are you looking for a secure WordPress hosting environment? At Total Server Solutions, we are audited to meet the requirements of SSAE 16, Type II – a gold standard of security developed by the American Institute of CPAs. See our first-line defenses.


Location:  Buckhead, Atlanta, GA

Shift:  8:00AM – 4:30PM

Total Server Solutions is a cutting edge data center & hosted services company based in Atlanta, GA.  Our goal is to provide the best, fastest, and most complete technical services to our customers.  At the moment though, we’re missing a key piece of the puzzle.  You!  We employ some of the best and brightest minds in the tech industry.  If you think you’d be a good fit, please read on.

Total Server Solutions is looking for a highly motivated, experienced, knowledgeable Linux/UNIX systems administrator to round out our tech team.  If you have years of experience managing large Internet-based application clusters, heroic organizational skills, and revel in diagnosing and fixing problems, you’ll be a great fit.  As one of our Linux/UNIX system admins, you will be responsible for working out solutions to complex problems that our customers may encounter during their daily operations.  Great problem-solving skills are a must.  As a growing, globally oriented company, we offer a relaxed work environment and great benefits.  We look forward to hearing from you!

Requirements:

  • 0-1 years of supporting Linux servers in a production environment; CentOS or Red Hat variants.
  • Motivation and ability to quickly learn and adapt.
  • Prior experience within a critical production environment.
  • Knowledge of LAMP Architectures (Perl/PHP/Python).
  • Knowledge of Red Hat, CentOS, and other RPM-based distributions.
  • Knowledge of ecommerce platforms (Magento, X-Cart, PinnacleCart, and CS-Cart).
  • Knowledge of virtual environments (VMware and OnApp).
  • Knowledge of monitoring systems.
  • Knowledge of Backup/Recovery/Upgrade procedures.
  • High degree of independence and exceptional work ethic with exceptional communication skills.
  • Experience with control panel technologies including cPanel, Plesk, DirectAdmin.
  • Must be located in, or willing to relocate to, the Atlanta, GA area.
  • Ability to work weekends and holidays.

Not Required but a huge plus:

  • Experience with management tools such as Puppet, Chef, etc.
  • Experience with automated system deployment tools and building pxe/kickstart/etc deployment scripts.
  • Bilingual. (Spanish a plus)
  • Experience with load balancing technologies.
  • Red Hat certifications

What you’ll be doing:

  • Linux server maintenance, monitoring, security hardening, and performance review.
  • Managing MySQL database operations and all things database-related.
  • Researching new platform architectures to support business requirements.
  • Interacting with customers and providing technical support via our helpdesk and live chat.

What’s in it for you:

  • Competitive Salaries.
  • Medical Insurance.
  • Paid Time Off.
  • Educational Reimbursement.
  • Employee Activities.
  • Paid Parking.
  • 401k.

If you are a Linux System Engineer, Linux System Administrator or Linux Engineer with experience, please apply today by contacting careers@totalserversolutions.com!  When contacting Total Server Solutions, please state your salary and any other compensation expectations.

Total Server Solutions is proud to be an Equal Opportunity Employer. Applicants are considered for all positions without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, ancestry, marital or veteran status.


As companies have increasingly realized the value of geographically distributed infrastructure in recent years, one technology has become especially important: the content delivery network (CDN). The use of CDNs, simply put, has been skyrocketing. Let’s look at what this solution is, how broadly it’s used, and why it matters for your business.

 

  • What is a CDN?
  • Astronomical growth of content delivery networks
  • Improvement over traditional web hosting
  • Relationship between time and distance
  • Transfer limitations
  • More than one location for a faster site
  • Distributed file transfer for better delivery to more users
  • Accelerating your business with a CDN

 

Think about a day in which the Internet is just a bit more rapidly responsive. You go online, type a URL into your browser, and everything pops up instantaneously. If you have ever railed at your computer for taking too long to load a page, you can imagine how great that user experience would be. Although the Internet isn’t yet beyond the issue of long load times, we now have the means to accelerate the Web more than ever before. A chief technology in this category is the content delivery network, or CDN.

 

What is a CDN?

A content delivery network is a vast, integrated network of servers that cache the files of your site and use IP location to determine which machine sends visitors your content.

 

You have multiple versions of the same files stored on servers that are intentionally positioned in or near high-traffic areas worldwide, explains Margaret Rouse of TechTarget. “CDN management software dynamically calculates which server is located nearest to the requesting client and delivers content based on those calculations,” she says. “This not only eliminates the distance that content travels, but also reduces the number of hops a data packet must make.”
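A toy version of that “nearest server” calculation: given a visitor’s coordinates (in practice derived from IP geolocation), pick the closest edge node by great-circle distance. The edge locations below are hypothetical, and production CDNs use far more sophisticated routing (anycast, real-time network measurements):

```python
import math

# Hypothetical edge locations: (latitude, longitude) in degrees.
EDGES = {
    "dallas": (32.78, -96.80),
    "london": (51.51, -0.13),
    "sydney": (-33.87, 151.21),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def nearest_edge(user_loc):
    return min(EDGES, key=lambda name: haversine_km(user_loc, EDGES[name]))

print(nearest_edge((48.85, 2.35)))  # a visitor in Paris -> "london"
```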

 

There are multiple advantages to this strategy actually, not just speed. You don’t experience as much packet loss. Your bandwidth is better utilized. You are able to reduce jitter, latency, and time-outs. Beyond this IT jargon, you are simply able to better meet the needs of your users, so a CDN should effectively boost your engagement and revenue. Furthermore, if you experience a breach or systemic failure, at least some of your traffic will still be able to access your site’s content.

 

Astronomical growth of content delivery networks

If this system sounds compelling to business, it certainly is, judging by the growth of the industry. Incredibly, the CDN market is expected to grow from $4.95 billion in 2015 to $15.73 billion in 2020, says analyst firm MarketsandMarkets. Yes, that’s more than tripling. It adds up to an eye-popping compound annual growth rate (CAGR) of 26.0%.

 

OK, so CDNs are trendy, but we all know popularity isn’t everything. Let’s look more closely at why this model might be useful to your business.

 

Improvement over traditional web hosting

If you are currently using traditional web hosting, here is why it makes sense to implement a CDN. In the old-school hosting model, your files (HTML, images, CSS, etc.) are all stored in one datacenter. When current users or potential customers come to your site, everything has to be sent out from that one “data home office.” Having your information and content centralized rather than dispersed internationally is problematic for a few reasons, as touched on above and further described below.

 

Relationship between time and distance

As an example, perhaps your traditional datacenter is in Texas. Whenever anyone wants to access your site, they are asking your Texas servers for the content. The amount of time it takes between the request and retrieval will grow the farther away a person is from Texas, because the files actually have to be sent that 500 or 1000 or 6000 miles. That means your website does not load as quickly for people who are at more of a distance. Australians, for example, will get an awful user experience compared to Texans.
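A rough back-of-the-envelope number shows why distance matters so much. Light in optical fiber travels at about 200,000 km/s, so path length alone puts a hard floor under round-trip time (the Dallas-to-Sydney distance below is an approximation):

```python
# Lower bound on round-trip time imposed by distance alone.
one_way_km = 13_800         # approx. Dallas-to-Sydney path (assumed)
fiber_speed_km_s = 200_000  # light in optical fiber, roughly 2/3 of c

rtt_ms = 2 * one_way_km / fiber_speed_km_s * 1000
print(f"minimum round trip: ~{rtt_ms:.0f} ms")  # ~138 ms before any server work
```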

 


 

Transfer limitations

“Consider a situation when multiple users are all trying to load your website at the same time,” notes Lukas Rossi in How to Get Online. “Just as your personal computer has limitations as to how fast files can be transferred across the network (throughput rate), servers also have limitations as to how fast they can transfer files.”

 

Traditional web hosting is not as able to scale to handle peaks in demand. That’s particularly the case with shared hosting, in which you often aren’t given guaranteed transfer rates. With any traditional hosting, your transfer rate will sometimes impede your ability to deliver a professionally reliable, strongly performing website to people who want to see it.

 

When you recognize that this issue arises with traffic spikes, think Black Friday or whatever your “high season” is. Traffic rises, and your load times grow, because content is being delivered to a large number of interested parties simultaneously. In other words, it’s exactly when your site has to “have its game-face on” – and instead (in this scenario), you are slow and losing customers.

 

Moving beyond this comparison to traditional hosting, let’s look again at the particular strengths of a CDN.

 

More than one location for a faster site

Think about a user who is at a great distance from your web host’s datacenter. That person has to wait, perhaps frustrated and impatient, as your site loads, waiting for files to arrive from some faraway place.

 

On the other hand, “[a] CDN will serve content from an edge server that is either closest to or most efficient for each individual end user, based on where they are located in the world,” explains Rossi. “For example, if someone in China were to load your website, the CDN might automatically load a copy of your website’s content from a server in China.” For someone who is in England, a UK-based server within the CDN network would deliver the site instead.

 

All of this occurs behind the scenes. In a way, you can think of a CDN as a form of automated UX customization, similar to marketing tactics such as behavioral targeting (in which users are shown ads based on how they’ve previously interacted with your site). The CDN figures out which server is best positioned to serve each particular user. In fact, a high-quality CDN will go beyond distance and factor in the availability of network servers, so that delivery stays fast even when the nearest node is busy.
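
Here is a minimal sketch of that selection logic, assuming a hypothetical list of edge servers and an invented scoring rule that weights great-circle distance by current load:

    import math

    # Hypothetical edge servers: (name, latitude, longitude, current load 0.0-1.0)
    EDGE_SERVERS = [
        ("New York", 40.7, -74.0, 0.30),
        ("London",   51.5,  -0.1, 0.55),
        ("Shanghai", 31.2, 121.5, 0.20),
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points on Earth, in kilometers
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(a))

    def pick_edge_server(user_lat, user_lon):
        # Invented scoring rule: distance, penalized by how busy the node is
        def score(server):
            _name, lat, lon, load = server
            return haversine_km(user_lat, user_lon, lat, lon) * (1 + load)
        return min(EDGE_SERVERS, key=score)

    # A visitor near Beijing should be routed to the Shanghai node
    print(pick_edge_server(39.9, 116.4)[0])  # -> Shanghai

A real CDN layers in live latency measurements, routing policy, and capacity data, but the principle is the same: route each request to the node that can answer it fastest.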

 

Distributed file transfer for better delivery to more users

As indicated above, every server has finite capacity for data transfer. A content delivery network keeps your site loading consistently fast regardless of how many people are trying to use it at a given time.

 

Because of the basic architecture of a CDN, numerous people coming to your site at once will mean that various servers in different locations will be utilized, Rossi comments. “In this way, one particular server is not flooded with all of the requests from users,” he says. “CDN providers will also implement other procedures in order to ensure that your files will load efficiently even amidst a spike in traffic.” As a basic example, if a New York server is being hit particularly hard and a DC one is available, the CDN will be able to calculate when to shift a New York request to DC for better performance.
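
Continuing the hypothetical sketch above, the same idea can be expressed as an overflow rule: try servers nearest-first and skip any that are running hot. The load figures and the Washington, DC entry are, again, invented for illustration:

    # Suppose the New York node is saturated while a nearby Washington, DC
    # node has spare capacity (hypothetical figures)
    EDGE_SERVERS[0] = ("New York", 40.7, -74.0, 0.95)
    EDGE_SERVERS.append(("Washington, DC", 38.9, -77.0, 0.25))

    def pick_with_overflow(user_lat, user_lon, max_load=0.8):
        # Walk the servers nearest-first and take the first one that isn't busy
        by_distance = sorted(
            EDGE_SERVERS,
            key=lambda s: haversine_km(user_lat, user_lon, s[1], s[2]),
        )
        for name, _lat, _lon, load in by_distance:
            if load < max_load:
                return name
        return by_distance[0][0]  # everything is busy: fall back to the nearest

    # A visitor in New Jersey overflows from New York to Washington, DC
    print(pick_with_overflow(40.2, -74.7))  # -> Washington, DC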

 

Accelerating your business with a CDN

Are you interested in optimizing your performance, improving your security, and enhancing your redundancy with a content delivery network? At Total Server Solutions, our CDN nodes are close to your customers, wherever they are. Get the reach you need.

Posted by & filed under List Posts.

Cloud continues to grow and is now considered by many to be the go-to technology for conducting business. One area in which cloud is particularly transformative is the human resources department. 

  • Cloud forecasts & market estimates
  • How cloud changed onboarding at Brooks Brothers
  • Overcoming “If it ain’t broke”
  • Fast-forward to the 21st century
  • Passing of torch from baby boomers to millennials
  • HR’s emergent role in strategy
  • Getting started

Perhaps no technology in history has been more heavily hyped than cloud computing, and that really is saying something. However, the astronomical growth of the cloud over the last few years is evidence that distributed virtual computing is not just trendy but really does offer significant advantages in terms of management, monitoring, speed, and affordability. Beyond those kinds of pat business improvements, cloud also represents a sea change in the way companies handle IT, fundamentally altering the workflow and even the culture of businesses.

Let’s look at the current state of the cloud computing market and the specific effect the rise of cloud is having on HR.

 

Cloud forecasts & market estimates

Various business analysis organizations (Gartner, Forrester, IDC, etc.) measure and predict the growth of cloud each year. In a review of recent reports, Louis Columbus of Forbes noted what he found to be a particularly salient point on the difference between businesses that have embraced cloud and those that haven’t. “The Economist found that the most mature enterprises are now turning to cloud strategies as a strategic platform for growing customer demand and expanding sales channels,” he says. “The study found low-maturity or lagging cloud adopters focus on costs more than growth.”

How about the specific numbers, though? Here are a few of the stats Columbus culled from these industry studies:

  • The market for cloud platform (PaaS) and infrastructure (IaaS) services will achieve a stunning CAGR of 19.62% between 2015 and 2018, hitting $43 billion.
  • Nearly half of companies in European Union (EU) countries use cloud for their accounting software, customer relationship management, or for infrastructural resources to power business programs.
  • Nearly two-thirds of SMBs (64%) use cloud applications, while more than three-quarters (78%) say that they intend to adopt new apps by 2017 or 2018.

Example: How cloud changed onboarding at Brooks Brothers

Onboarding of new employees at Brooks Brothers used to be extraordinarily tedious and time-consuming in the pre-cloud, paper-centered era, according to the company’s talent management director, Justin Watras. “In a perfect scenario they’d show up with a bunch of employment documents, but more often than not they forgot or weren’t told,” he says. “Either way, a manager or HR member would have to devote significant time to sitting with them and filling out paperwork.”

Today, Brooks Brothers simply emails the new recruit a link to digital versions of the same documents, asking them to fill it all out prior to their start date. The email also provides instructions and a login to get into the company’s system and access important employee information and a video featuring CEO Claudio del Vecchio.

 

Overcoming “If it ain’t broke”

Back to the “strategic platform” comment from The Economist above. When companies understand the power of cloud, HR sees some of the most monumental improvements. To handle basic employee elements such as attendance and benefits, long-established firms have legacy systems on-site that they’ve spent years tailoring to their own needs.

“Companies have spent loads of money on installed systems, and often they’re not terribly excited about spending a ton of money to change,” explains Forrester analyst Claire Schooley.

While it’s reasonable that businesses want to customize their on-premises environments, doing so also makes everything unnecessarily complex – especially since HR doesn’t face the tight controls of a function like finance and can get as technologically creative as it likes. Plus, human resources isn’t a revenue generator, so it’s often underfunded. Many of its processes are seemingly stuck in time, remaining essentially the same for decades.

 

Fast-forward to the 21st century

When Brooks Brothers moved to the cloud in 2014, it left behind clunky on-premises payroll software that was customized to achieve various HR needs.

At Brooks Brothers, as at many organizations, adopting cloud wasn’t just about technology but about making the organization and its information systems more democratic. It was about ownership. Store managers used to have to commit huge chunks of their time to employee paperwork; their time was effectively controlled by the process. Now that burden has been lifted, since the data lives in the cloud and ownership of it is shared.

“We sold this to managers by saying, ‘Yes, you’ll be responsible for inputting some data, but you’ll also have access to all this data… that empowers you to support and to know and grow your people,’” Watras notes.

What specific advantages has Brooks Brothers seen? Productivity has risen 10%, with managers freed from 15 tedious hard-copy onboarding processes that are now performed digitally.

 

Passing of torch from baby boomers to millennials

The concept of transformation typically seems horrifying to anyone who is focusing on the bottom line. However, cloud is heralded for its immediate and ongoing budget-friendliness. There is no big price tag for preparation and deployment as there is with a legacy system – which is also pricier to maintain.

Additionally, user-friendliness is essential as millennials continue to take over the positions left behind by retiring baby boomers. User-friendly systems create better employee engagement, and happy workers are more productive – especially those who grew up in an increasingly UX-geared culture. If you want optimal results, it’s important to account not only for multiculturalism in your workforce but also for multigenerationalism.

The relatively recent rise of cloud services has ushered in more consumer-friendly, “natural” interfaces built for the age of the mobilized, interconnected third platform. Since cloud is more geared toward standardization than customization, it prompts companies to reevaluate processes that have often overstayed their welcome with employees. Many of the ways that companies have altered software to meet their own ends are about propping up outmoded processes that should be obsolete.

Organizations sometimes tell Schooley that the ways they have tweaked their environments are critical, and she has to convince them otherwise: “I’ll say, ‘What areas do you absolutely have to have customized or you can’t move forward?’ I’ve had organizations come back and say it’s really just two or three.”

 

HR’s emergent role in strategy

Predictive analytics will continue to become more sophisticated with cloud tools, and that will allow human resources departments to gradually play a stronger role in business strategy.

“My dream… is that when we have the executive team come together for weekly reviews of the business,” says Watras, “they’ll be looking not just at financial and other dashboards but also the people analytics.”

For instance, when a store is unsuccessful, more sophisticated and automated data analysis will allow leadership to evaluate factors such as pay scale and workforce retention alongside traditional gauges such as location density and rent. In that scenario and others, the human resources department could be converted from a cost center into a part of the business that demonstrably provides value.

 

Getting started

Is it time for your company to take a strategic leap and transition from legacy to a cloud HR system? At Total Server Solutions, our cloud hosting boasts the highest levels of performance in the industry. Your cloud starts here.

Posted by & filed under List Posts.

High performance infrastructure & compute resource provider announces participation in UCX Exchange to retask available compute resources.

Atlanta, GA — Total Server Solutions, a top-tier high performance infrastructure, cloud computing, and bare metal resource provider, officially announced that it has joined UCX, the Universal Compute Xchange, and will be an active market-maker of bare metal and cloud resources on the exchange.

UCX is an on-demand spot exchange where resource providers such as Total Server Solutions can offer unused or underutilized resources within a real-time marketplace. Potential customers, investors, or anyone who requires high performance infrastructure, bare metal, or cloud resources can bid on what they need as offered by an array of top-tier providers. UCX vets all service providers to ensure they meet specific performance and business metrics.

Gary Simat, CEO of Total Server Solutions, states: “We are excited to offer some of our resources for trade within UCX as they come off contract. This helps us realize a greater return on our infrastructure investment while allowing customers to get favorable rates on what they need.”

By bringing standardization to compute resources, UCX is working to transform the way compute resources are allocated, used, and traded. Its unique approach allows customers to acquire underutilized resources at true market rates, which in turn helps providers keep their resources fully utilized.

“We are delighted to add Total Server Solutions as our newest Member Provider on the exchange.  Gary and his team have been fantastic to partner with and we look forward to leveraging their global footprint of data center resources,” said UCX COO Tim Martin.

UCX leverages a state-of-the-art trading platform, licensed from CME Group, to create a central price discovery mechanism for trading digital assets. Buyers bid on excess capacity offered by service providers, which compete for buyers’ business in real time.