Posted by & filed under Careers.


 

Location:  Buckhead, Atlanta, GA

Shift:  7:00 AM – 3:30 PM or 3:00 PM – 11:30 PM

 

Total Server Solutions is a cutting-edge data center & hosted services company based in Atlanta, GA.  Our goal is to provide the best, fastest, and most complete technical services to our customers.  At the moment, though, we’re missing a key piece of the puzzle.  You!  We employ some of the best and brightest minds in the tech industry.  If you think you’d be a good fit, please read on.

 

Total Server Solutions is looking for a highly motivated, experienced, knowledgeable Linux/UNIX systems administrator to round out our tech team.  If you have years of experience managing large Internet-based application clusters, have heroic organizational skills, and revel in diagnosing and fixing problems, you’ll be a great fit.  As one of our Linux/UNIX system admins, you will be responsible for working out solutions to complex problems that our customers may encounter during their daily operations.  Great problem-solving skills are a must.  As a growing, globally oriented company, we offer a relaxed work environment and great benefits.  We look forward to hearing from you!

 

Requirements:

  • 3+ years of supporting Linux servers in a production environment; CentOS or Red Hat variants.
  • Motivation and ability to quickly learn and adapt.
  • Prior experience within a critical production environment.
  • Solid knowledge of LAMP Architectures (Perl/PHP/Python).
  • Knowledge of Red Hat, CentOS, and other RPM-based distributions.
  • Experience with replication, clustering, tuning, sizing, and monitoring of the operating systems that run the LAMP stack.
  • Experience in Shell Scripting (bash preferable).
  • Experience in ecommerce platforms (Magento, X-Cart, PinnacleCart, and CS-Cart).
  • Experience in virtual environments (VMware and OnApp).
  • Experience with Splunk, Zabbix or other system/device monitoring & logging tools.
  • Knowledge of Backup/Recovery/Upgrade procedures.
  • Experience working in 24/7 operational environments.
  • Expectation to be challenged.
  • High degree of independence and exceptional work ethic with exceptional communication skills.
  • Experience with control panel technologies including cPanel, Plesk, DirectAdmin.
  • Must be located in, or willing to relocate to, the Atlanta, GA area.
  • Ability to work weekends and holidays.

 

Not required, but a huge plus:

  • Experience with management tools such as Puppet, Chef, etc.
  • Experience with automated system deployment tools and building pxe/kickstart/etc deployment scripts.
  • Bilingual. (Spanish a plus)
  • Experience with load balancing technologies.
  • Red Hat certifications

 

What you’ll be doing:

  • Linux server maintenance, monitoring, security hardening, performance review
  • Managing MySQL database operations and all things database related.
  • Researching new platform architectures to support business requirements.
  • Interacting with customers and providing technical support via our helpdesk and live chat

 

What’s in it for you:

  • Competitive Salaries.
  • Medical Insurance.
  • Paid Time Off.
  • Educational Reimbursement.
  • Employee Activities.
  • Paid Parking.
  • 401k.

 

If you are a Linux System Engineer, Linux System Administrator or Linux Engineer with experience, please contact careers@totalserversolutions.com today!  When contacting Total Server Solutions, please state your salary and any other compensation expectations.  

 

Total Server Solutions is proud to be an Equal Opportunity Employer. Applicants are considered for all positions without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, ancestry, marital or veteran status.

 

Posted by & filed under List Posts.

Want to impress people who visit your site? Here are almost three dozen different ways to improve your blog or site so you can better engage with visitors.

 

  • Nearly 3 dozen WordPress design tricks
  • High-speed cloud WordPress hosting

 

The majority of WordPress instances have a similar look and feel. If you want yours to be eye-catching and memorable, it’s critical to make this platform your own, customizing it within the PHP code and theme.

 


 

Nearly 3 dozen WordPress design tricks

Here are a bunch of different ways you can make your WordPress blog your own:

 

Blog post submission tool

Create forms that you can adjust to your own needs, permitting subscribers or other users to send in blog posts directly through the site.

 

Comment pagination

With Paginated Comments, divide all comments into various Google-friendly pages.
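Under the hood, comment pagination is just fixed-size chunking. A minimal sketch of the logic (in Python for illustration, since the plugin itself is PHP):

```python
import math

def paginate(comments, per_page=20):
    """Split a list of comments into fixed-size pages -- the core of
    comment pagination, independent of any framework."""
    pages = max(1, math.ceil(len(comments) / per_page))
    return [comments[i * per_page:(i + 1) * per_page] for i in range(pages)]

pages = paginate(list(range(45)))   # 3 pages: 20, 20, and 5 comments
```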

 

Image captioning

With Image Caption, create a caption beneath any images on your blog, populating the information from the title or alt attribute. You can use your own CSS styling too.

 

Random redirection

With the plugin Random Redirect, simply randomize all your content. It “allows you to create a link to yourblog.example.com/?random which will redirect someone to a random post on your blog, in a StumbleUpon-like fashion,” explains Hongkiat Lim of Hongkiat.

 

Dynamic sidebar

Sidebars often aren’t given much attention because there is nothing captivating about them. You can change the sidebar content based on the post by creating dynamic sidebars.

 

Apple Accordion sidebar

The Accordion plugin turns your WordPress sidebar into an Apple-style accordion menu, built on jQuery UI.

 

Google Syntax Highlighter for WordPress

This tool brings the Google Syntax Highlighter, created by Alex Gorbatchev, into WordPress.

 

Date image hack

You can build in a calendar view of the dates you post blogs. This hack from YugaTech replaces dates with dynamic images.

 

Individual post styling

By using the_ID, you can create better style differentiation between different articles.

 

Preventing any content duplication

You don’t want to repeat yourself, because it’ll hurt you in the search results. Avoid duplicating any of your posts with this tactic from Weblog Tools Collection, allowing you to show two loops without repeating posts from either of them.
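The idea behind that tactic is simple: remember which post IDs the first loop has already displayed, then exclude them from the second. A rough sketch of the logic (in Python for illustration; the actual tactic is PHP inside the loop):

```python
def two_loops(featured, recent):
    """Show a featured loop, then a recent loop that skips any post
    already displayed -- the de-duplication idea in miniature."""
    shown = set(featured)
    return list(featured), [p for p in recent if p not in shown]

first, second = two_loops([1, 2, 3], [2, 3, 4, 5])   # second == [4, 5]
```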

 

Facelift Image Replacement (FLIR) for WordPress

If you want your text and title to be changed into images so they display better, FLIR makes that process simple.

 

del.icio.us for WordPress

This tool simply allows you to present your bookmarks from del.icio.us within your blog.

 

PopURLs

You can create a similar experience to PopURLs on your site using this set of instructions.

 

Prevent specific categories

 You can use two different tactics to block posts within specific categories. One is with Advanced Category Excluder. The other is to insert this script within the loop:

 

<?php

if ( have_posts() ) : query_posts($query_string . '&cat=-1,-2'); while ( have_posts() ) : the_post();

?>

 

Page Redirect template

Part of the idea with WordPress is that you have a tight and specific system, and it’s that very organization that (ironically) gives you so much flexibility. However, it’s sometimes important in terms of the way that your pages display to operate beyond the standard bounds.

 

With this template, you can set a URL for the content, and while the page is loading, the template redirects to the new page, with whatever tags and categories you want to include.

 

Save buttons for del.icio.us

You can create badges that make it likelier for someone to bookmark your blog.

 

DesignFloat submission

This option gives people who visit your blog the chance to “Float” your articles on Design Float.

 

Stumble It buttons

Similarly, you may want to integrate StumbleUpon so people can easily submit to that community.

 

Menu with dynamic highlighting

Via the use of class="current", you can style and otherwise modify whichever menu item is currently selected, in CSS, with this code from Lim:

 

<ul id="nav">

<li<?php if ( is_home() || is_category() || is_archive() || is_search() || is_single() || is_date() ) { echo ' class="current"'; } ?>><a href="#">Gallery</a></li>

<li<?php if ( is_page('about') ) { echo ' class="current"'; } ?>><a href="#">About</a></li>

<li<?php if ( is_page('submit') ) { echo ' class="current"'; } ?>><a href="#">Submit</a></li>

</ul>

 

The second line of that script indicates that class="current" is included within <li> when the home, category, archive, search, single-post, or date view is selected.

 

The third and fourth lines indicate that class="current" is added if a page is selected whose slug is “about” or “submit.”

 

You can also make menu tabs that use categories dynamic with the following bit of code:

 

<ul id="nav">

<li<?php if ( is_category('css') ) { echo ' class="current"'; } ?>><a href="#">CSS</a></li>

<li<?php if ( is_category('showcase') ) { echo ' class="current"'; } ?>><a href="#">Showcase</a></li>

</ul>

 

DZone buttons

Similarly, you can allow guests to recommend your articles on DZone while staying on your blog.

 

Reddit buttons

You can use any of various Reddit buttons to better distribute your blog content and increase how much you’re discussed on that platform. You can use buttons with points. You can also modify them by turning off styles, changing the URL, or opening links in a new window.

 

Better archiving

You can use different strategies, as discussed on Noupe, to change the way your archive page is formatted and displayed. Options include:

 

  • Listing every post you’ve made
  • Presenting everything from the year or month
  • Arranging everything within their categories.
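All three arrangements boil down to grouping posts by a key. A quick illustrative sketch (in Python; the post fields shown here are a hypothetical schema):

```python
from collections import defaultdict

def group_archive(posts, key):
    """Group post titles for an archive page by any field
    (year, month, or category)."""
    groups = defaultdict(list)
    for post in posts:
        groups[post[key]].append(post["title"])
    return dict(groups)

posts = [
    {"title": "Hello", "year": 2016, "month": 8, "category": "news"},
    {"title": "Cloud", "year": 2016, "month": 9, "category": "hosting"},
    {"title": "Tips",  "year": 2015, "month": 1, "category": "news"},
]
by_year = group_archive(posts, "year")
by_category = group_archive(posts, "category")
```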

 

Addition of breadcrumbs

You can think of breadcrumbs as an additional navigation method that instantly improves your UX. You can essentially take any theme and give it breadcrumbs with this plugin, Breadcrumb NavXT.
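Conceptually, a breadcrumb trail is just the URL path rendered as navigation. A toy sketch in Python:

```python
def breadcrumbs(path, home="Home"):
    """Render a URL path as a breadcrumb trail,
    e.g. '/blog/2016/wordpress-tips' -> 'Home > blog > 2016 > wordpress-tips'."""
    parts = [p for p in path.strip("/").split("/") if p]
    return " > ".join([home] + parts)

trail = breadcrumbs("/blog/2016/wordpress-tips")
# "Home > blog > 2016 > wordpress-tips"
```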

 

Buy me a…

You can include a button that implements donations via PayPal, typically used as “Buy Me a Beer” or “Buy Me a Coffee” by independent bloggers.

 

Notifixious instant messaging

When posts are added, you can send notifications directly to your readers on instant messaging, text message, or email via this plugin.

 

Using XAMPP with WordPress

You can use XAMPP to run WordPress locally. “You can also install plugins, upgrade to the latest nightly and virtually anything else confident in the knowledge that if it goes wrong, there is no impact on your actual site,” explains Lim.

 

Presenting Feedburner subscribers as text

Rather than using chiklets, this quick tutorial allows you to present your Feedburner count as text.

 

Landing pages

When anyone comes from a SERP page to your blog, they are zeroed in on something specific. Once they hit your page, they will quickly scan a bit and then leave if they can’t see what they need. You can better keep those people on your site by presenting other posts on your blog that also fit their search criteria.

 

Auto-completion using Ajax

Adding auto-complete to your search box makes it much easier for users to find their way around your collection of content. Here is a how-to guide from WordPress Hacks.
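Server-side, the core of auto-complete is a prefix match against your post titles. A minimal illustrative sketch in Python (a real implementation would query the database and return JSON to the Ajax call):

```python
def suggest(titles, prefix, limit=5):
    """Case-insensitive prefix match over post titles -- the server-side
    half of an Ajax auto-complete."""
    p = prefix.lower()
    return [t for t in titles if t.lower().startswith(p)][:limit]

titles = ["WordPress security", "WordPress caching", "Cloud hosting"]
matches = suggest(titles, "word")   # both WordPress titles
```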

 

High-speed cloud WordPress hosting

All these interesting functionalities above can be powerful for engaging visitors and spreading your message beyond the bounds of your site. However, you also need a hosting service that’s fast enough to deliver your content and meet the expectations of visitors and search engines.

 

At Total Server Solutions, we engineered our cloud solution with speed in mind, and SSD lets us provide you with the guaranteed levels of performance that you demand. Your WordPress cloud starts here.

Posted by & filed under List Posts.

Here is a checklist so you can get started with WordPress smartly, organizing everything intelligently, taking reasonable steps against spammers and hackers, and leveraging strong tools to boost your growth.

 

  • #1 – Set up automated backups.
  • #2 – Check the primary username.
  • #3 – Get WordPress API key & Akismet.
  • #4 – Set up permalinks.
  • #5 – Hook into Feedburner.
  • #6 – Get fundamental plugins.
  • #7 – Install a premium theme.
  • #8 – Dump extraneous themes and plugins.
  • #9 – Add the blog to Google Webmaster Central.
  • #10 – Put up a Contact and About page.
  • #11 – Change your title.
  • #12 – Verify the speed of your hosting environment.

 

It helps to go through a short to-do checklist right after installing WordPress, so your experience with the CMS can be as strong as possible moving forward. The good news is that the majority of these items only need to be performed once.

 

If you are planning to build a lot of websites using WordPress, it’s a good idea to bookmark this page or put together your own to-do list as a Google Doc so you don’t forget any of these items in the future.

 


 

#1 – Set up automated backups.

Backing up your data is about saving and storing it, but also about properly and quickly restoring it as needed. Think about it: if your site goes down, it’s key that you’re able to restore everything ASAP.

 

A plugin you can use for backups is BackupBuddy. You can also discuss the backup policies and options with your WordPress hosting provider.
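To make the idea concrete, here is a rough sketch of the file-level half of a backup (in Python for illustration; the paths are hypothetical, and a complete backup also needs a database dump and an off-server copy):

```python
import os
import tarfile
import tempfile
from datetime import date
from pathlib import Path

def backup_site(wp_dir, backup_dir):
    """Archive the WordPress directory into a date-stamped tarball.
    (A real backup also dumps the database and copies both off-server.)"""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / f"wp-files-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(target, "w:gz") as tar:
        tar.add(wp_dir, arcname=".")
    return target

# Demo against a throwaway directory (a real run would point at the
# live document root, e.g. /var/www/html -- hypothetical path):
tmp = tempfile.mkdtemp()
site = os.path.join(tmp, "site")
os.makedirs(site)
with open(os.path.join(site, "index.php"), "w") as f:
    f.write("<?php // stub")
archive = backup_site(site, os.path.join(tmp, "backups"))
```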

 

#2 – Check the primary username.

Are you auto-installing WordPress? If so, there is a good chance your primary username is admin. If it is, change it immediately, notes Joe Fylan of Elegant Themes. “Another good idea is to make sure you’re not using your administrator account for non-administrative tasks,” he adds. “If you’re publishing post or pages, create a separate account with author privileges.”

 

#3 – Get WordPress API key & Akismet.

Go to Settings > Akismet Configuration to activate this plugin. It cleans your site of the truly absurd amount of trackback spam that otherwise appears on a WordPress site. To get this tool to work, sign up for a free API key from WordPress.com.

 

Along the same lines in terms of limiting spam, you can use Disqus, which replaces the standard commenting system with one that many businesses and bloggers prefer.

 

#4 – Set up permalinks.

Go to Settings > Permalinks to implement this feature. The default option just gives each page a number, which isn’t good for SEO. The best option is a custom structure. To feature your title in the URL, insert this in the text box for custom permalinks:

 

/%postname%/
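Behind the scenes, WordPress converts the post title into a URL-safe slug for that %postname% segment. A simplified sketch of that conversion (in Python for illustration; WordPress’s own handling covers more cases, such as accented characters):

```python
import re

def slugify(title):
    """Turn a post title into a URL-safe slug (an ASCII-only
    simplification of the %postname% permalink segment)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

slug = slugify("Top 10 WordPress Tips!")   # "top-10-wordpress-tips"
```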

 

#5 – Hook into Feedburner.

 

Feedburner unifies all your feeds into one, explains Hongkiat Lim. That way “your subscribers can subscribe to one regardless of its type,” he says. “Feedburner also comes with a chiklet, allowing you to show off subscribers’ figures as well as promote subscription.”

 

#6 – Get fundamental plugins.

Beyond the anti-spam tools mentioned above, you want to have a few other types of plugins in place:

 

Security plugin – You can take many steps to harden the security of WordPress, but consider a strong plugin such as iThemes Security. WordFence is an alternative that also gets very high ratings.

 

Post revisions plugin – It’s easy to make a mistake with a post, so it’s good to have saved revisions. You will likely end up with quite a few revisions stored within the editor, taking up space within the database. If you are just getting started, you can use the plugin Revision Control. If you already have quite a few revisions in your installation, try a plugin such as WP Clean Up or WP Sweep.

 

SEO plugin – The Yoast SEO plugin helps do what you’d expect from better search engine presence: drive more visitors to your site. By improving your SEO, you’ll both provide better content for the search engines and improve user experience. The reason this is a first-and-foremost concern is that you want everything you do to be optimized upfront rather than having to go back and improve this aspect later. The other advantage of Yoast specifically is that it creates a sitemap for submission to Google.
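A sitemap itself is just a small XML file listing your URLs. A minimal illustrative generator (in Python; the example.com URLs are placeholders), shown only to demystify the format a plugin like Yoast produces for you:

```python
from xml.sax.saxutils import escape

def sitemap_xml(urls):
    """Emit a minimal sitemap document per the sitemaps.org format."""
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n</urlset>"
    )

xml = sitemap_xml(["https://example.com/", "https://example.com/about/"])
```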

 

Caching plugin – Caching makes your site load faster. That helps tremendously, both with Google and with engagement of users. Options include W3 Total Cache and WP Super Cache.
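The principle behind these plugins: store the rendered HTML for a URL and serve it again until it expires, instead of rebuilding the page on every request. A toy sketch in Python:

```python
import time

class PageCache:
    """Toy page cache with a time-to-live: serve stored HTML
    instead of re-rendering the page on every request."""
    def __init__(self, ttl=300):
        self.ttl = ttl
        self.store = {}

    def get(self, url, render):
        entry = self.store.get(url)
        now = time.time()
        if entry and now - entry[1] < self.ttl:
            return entry[0]          # cache hit: skip rendering
        html = render(url)           # cache miss: build the page
        self.store[url] = (html, now)
        return html

calls = []
def render(url):
    calls.append(url)
    return f"<html>{url}</html>"

cache = PageCache(ttl=60)
first = cache.get("/home", render)
second = cache.get("/home", render)   # served from cache
```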

 

#7 – Install a premium theme.

Consider using a premium theme. Fylan is biased on this front since he’s at a theme shop, but he points out there are typically three major ways in which paid themes are preferable to free options, including better functionality and better support. Perhaps most importantly, though, they are typically more secure. “Individual themes are hacked more often than the WordPress core,” says Fylan. “Using a well maintained premium theme usually means you’re lowering your risk of succumbing to a security threat.”

 

#8 – Dump extraneous themes and plugins.

Once you have your theme and plugins in place, it’s good to sweep out any that you aren’t using.

 

You may think you want to keep around these unnecessary items, to shelve a theme or plugin in case you want to use it again.

 

The problem is that each is a potential way for your site to be exploited. Delete them. Keep your WordPress streamlined.

 

#9 – Add the blog to Google Webmaster Central.

Google Webmaster Central lets you do the following:

 

  • Submit a sitemap to Google
  • Check how your site is indexed
  • View instances of Googlebot crawling
  • See statistics on your traffic
  • Diagnose any traffic problems you are having.

 

#10 – Put up a Contact and About page.

If you want people to trust your site from day one, let them know who you are. That simple page is the first place anyone will go who wants to know who you are or otherwise gauge credibility.

 

#11 – Change your title.

The site title is of course critical to your success and growth. Go to Settings > General to locate the title.

 

The title should be about 50 characters (more below on length). It should provide the name of your business and its location, or something similar. In other words, it should say “Bob’s Tire Emporium | Lincoln, Nebraska” rather than “Cheap Tires, Discount Tires, Low Price Tires | Lincoln, Nebraska.”

 

Regarding the length, keep in mind both for the full site’s title and for each page’s title, Google usually displays only 50-60 characters (based on how many fit within a 512-pixel display). Keep your titles under 55 characters, and they will display correctly the vast majority of the time.
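As a quick sanity check while writing titles, the 55-character guideline is easy to automate. A sketch in Python (approximate, since Google truncates by pixel width rather than character count):

```python
def fits_serp(title, limit=55):
    """Rough check of the guideline above: titles of 55 characters or
    fewer usually display untruncated in Google results."""
    return len(title) <= limit

good = "Bob's Tire Emporium | Lincoln, Nebraska"
bad = "Cheap Tires, Discount Tires, Low Price Tires | Lincoln, Nebraska"
```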

 

While this issue is important, it’s worth noting that you actually don’t have full control over it in terms of the search engines. “[S]earch engines may choose to display a different title than what you provide in your HTML,” notes Moz. “Titles in search results may be rewritten to match your brand, the user query, or other considerations.”

 

#12 – Verify the speed of your hosting environment.

As mentioned above, and as is a top priority across the Internet, speed will determine your success with WordPress, both in user experience and in SEO. The Total Server Solutions Cloud, with its SolidFire SSD-based SAN storage backend, provides IOPS levels that are unmatched by virtually any other cloud hosting provider. Order your cloud now.

Posted by & filed under List Posts.

Since the early 1900s, the progression of gaming has reflected the development of technology. Today, trends in platforms and development, such as mobile and cloud computing, indicate how the industry will move forward.


  • The state of platforms
  • The developer perspective
  • Multiple uses of cloud within gaming
  • Additional gaming forecasts
  • Cloud hosting for your gaming company

 

As you may have noticed, gaming has made significant advances since the original arcade games were built toward the beginning of the 20th century. Starting out with pinball machines and advancing toward virtual reality, the history of gaming is essentially a mirror of technological innovations up to the present.

 

It’s of course easier to review what’s happened with gaming than it is to forecast how it will develop in the future. Nonetheless, the technologies that are being used increasingly today give us a good idea of how gaming will look in the years ahead.

 

The state of platforms

 

Mobile

Just like most people have moved on from pinball machines at the arcade, we have also left behind playing Snake on our Nokia phones. The much more sophisticated graphics and gameplay, enabled by the performance of both dedicated systems and cloud hosting, have made mobile gaming much more user-friendly.

 

Mobile games run on various operating systems, with app stores on iOS, Windows, and Android. Android is the best option in terms of access, suggests Kira Bloom in Beta News. “Boasting the highest number of free games, Google Play offers developers much more flexibility in terms of developing for the Android platform,” she says. “Android users are even able to download directly from developers’ sites, which is strictly forbidden on iOS.” The benefit of iOS is stronger graphics, but it isn’t as easy to publish games because of the closed app store – which means there are fewer offerings for gamers.

 

Computer

As mobile has become more popular, computer games are no longer nearly as widely used, although distracted office workers can still get in a quick hand of solitaire. The downturn of the computer game market is indicated by the declining prominence of the Windows store.

 

Developers and gamers have moved away from Windows since the games are incompatible with other platforms, notes Ian Paul in PCWorld. “Windows Store games lack an EXE file so there’s no way to add these games to Steam and manage them there,” he says. “That means you can’t use the Steam overlay while in a Windows Store game, so you can’t access handy Steam features like screenshots or chatting with friends.”

 

While computer games are no longer given as much attention as alternatives, there are advantages. The screen is big, the performance level is high, battery life is more extended, and you can benefit from sites such as GOG. Plus, you can combine the platforms of PC and mobile via Andy OS. Using this program, you can run millions of Android mobile games on a computer. In other words, just because things are trending toward mobile does not mean that those who want to use their computers to play have to lose out.

 

Consoles

The gaming console was initially introduced in the 1960s, so kids who used those first consoles are retiring, and the devices have gone through decades of innovation. Since the console now has had a user base for so long, developers have become more confident about them and have turned to a greater degree of transparency. Gamers can now stream developments, which is both a form of promotion and a way to get a real-time understanding of the community reception: developers can learn what users want as they fine-tune.

 

The developer perspective

 

Coders typically will use the same basic patterns and principles to develop, regardless of the platform, such as mobile, web, console, or VR headset. More than anything, people who create games want the best possible performance.

 

Infrastructure

In the past, games were almost always native. Now, however, developers are increasingly using generalized frameworks, notes Bloom. “Native game development is great in terms of creating a brand new, visually appealing and smooth running game but it is pricey to say the least,” she says. “Building an entire game from scratch requires multiple developers and is time consuming.”

 

Beyond the move away from native development, coders also have access to source code libraries, so there is a strong foundation at the beginning. Source code allows for much greater efficiency, meaning that programmers can zero in on the details, resulting in a richer and more compelling gamer experience.

 

Cloud hosting

Again, the evolution of games basically follows the evolution of technology – and one of the most important recent tech advances is cloud computing. Infrastructure-as-a-service and other cloud tools have matured significantly since their inception, and that makes it a good fit for the style-specific needs of individual coders, notes Bloom. “No longer simplistic, cloud computing is preferable in its complex form as it enables much more flexibility for the developer in terms of storage,” she says. “More complex cloud computing means more efficient coders.”

 

Beyond development: uses of the cloud for gaming

The use of cloud hosting within gaming goes beyond its obvious implications regarding a broader toolset for developers. Here are a few examples from Rick Delgado of Social Media Today:

 

  1. Storing games – Cloud hosting is used by gamers to store copies of their games.
  2. Boosting power – The cloud is used for consoles, to amplify their processing capabilities (as with the 300,000 cloud servers used to improve performance of the Xbox One). For example, physics modeling and cloth motion can be handled via cloud hosting so that gameplay isn’t 100% reliant on a standalone machine.
  3. Delivering games – Currently, games are often either purchased as physical copies or downloaded from dedicated systems. Cloud will fast become the standard delivery model.
  4. Streaming in real-time & updates – Cloud hosting will increasingly be used for real-time streaming to gamers’ TVs and computers. Additionally, games will be expanded and updated over time via the distributed technology (such as AI updates or additional content).

 

Additional gaming forecasts

The virtual reality headset Oculus Rift is new, but sales are high. Due to this success, VR cloud computing could be next – which would mean that the game is streamed straight to the headset. “This makes VR more reasonably priced, thus becoming more accessible to the general population,” says Bloom. “It is an exciting prospect that we not only hope to see in the future, but see as a very viable option for the future.”

 

The Nintendo Miitomo mobile platform is changing the nature of online social interaction. As games are added, social exchanges via mobile games will follow a similar pattern as we see in the extension of physical reality to the virtual one.

 

In terms of coders, college degrees are quickly becoming unnecessary. Instead, programmers are choosing to go to coding bootcamps. It’s true hands-on learning. Examples are Elevation Academy in Israel, Maker’s Academy in London, and Coding Dojo in Silicon Valley.

 

“With developers graduating from coding courses every day,” says Bloom, “it’s exciting to see what the future will look like in terms of game development.”

 

Cloud hosting for your gaming company

Are you interested in accelerating your gaming business with cloud hosting? At Total Server Solutions, we believe that a cloud-based solution should be scalable, reliable, fast, and easy to use. We do it right.

Posted by & filed under List Posts.


 

Cloud hosting has disrupted markets and industries worldwide. One example is gaming, which has been influenced by the cloud and will be increasingly impacted by the technology in the future. Let’s look at how it’s been used with development and otherwise, and why IaaS is the best choice for web-based and social games.

 

  • Cloud hosting: worldwide disruption
  • A tool to improve game development
  • Beyond development: uses of the cloud for gaming
  • Delivery of games and updates: the present & future
  • Communicating the power of cloud within your business
  • A partner for your gaming cloud

 

Cloud hosting: worldwide disruption

Cloud has had a major effect throughout business, government, and healthcare. Let’s just take a look at the statistics, from a roundup by Louis Columbus of Forbes. “In 2016, spending on public cloud Infrastructure as a Service hardware and software is forecast to reach $38B, growing to $173B in 2026,” Columbus reports. “SaaS and PaaS portion of cloud hardware and infrastructure software spending are projected to reach $12B in 2016, growing to $55B in 2026.”

Essentially, it is a technology that is disrupting nearly every market across the planet, within just about every industry. The world of gaming is no exception.

A tool to improve game development

Along with becoming much more widely adopted, cloud computing has also become much more sophisticated in recent years. That has direct relevance to the focus of game developers on culturing and facilitating their own development styles. Now that cloud has come of age, it means that developers are much better able to fit its fast and painless storage solutions to suit their needs, explains Kira Bloom in BetaNews. As the capabilities of cloud have expanded, so has the capacity of coders to create games efficiently.

In this sense, the impact of cloud on the development community is, well, game-changing. Rather than having to worry about massive capital expenditure on servers, developers can use the operational expenditure model of cloud. One company that has used the distributed technology to accelerate its operations is KUMA Games, explains Rick Delgado in Social Media Today. The firm “uses the cloud as a way to render graphics and support downloads,” he says, “giving the developer more agility and scalability while focusing on gamer interaction.”

Beyond development: uses of the cloud for gaming

Cloud actually isn’t just about development. It’s used by gamers too, and in other ways.

Probably the primary way in which cloud is used is as storage by gamers themselves. The cloud is a simple, quick, and straightforward way for gamers to store their games in a manner that is incredibly reliable and makes the saved games accessible anywhere.

Another major way in which the cloud has been used is to boost the processing power of consoles. For instance, the Xbox One is backed by 300,000 cloud servers. While that tactic is in use today, the strategy of using the cloud for processing power could be ubiquitous in the coming years.

In order to appreciate cloud as a processing tool, it helps to understand consoles and PCs as interdependent rather than standalone devices. “With the benefit of cloud computing’s added processing power, some of the tasks normally associated with consoles and PCs can be offloaded to the cloud,” says Delgado. “This can easily lead to an overall improved gaming experience.”

How exactly can the cloud help in this way? In the conventional, independent-machine way of gaming, any artificial intelligence and rendering has to be processed directly at the local level. Today, resource-intensive tasks such as physics modeling and cloth motion can be handled by the cloud instead. The console or personal computer doesn’t have to compute everything as an isolated entity. This shift leads to sharper textures, an overall better picture, and more seamless gameplay.

Delivery of games and updates: the present & future

Many people of course still purchase physical copies of games, but the physical model could be discarded in favor of cloud hosting – notably a more environmentally friendly approach. Right now, game downloads often go not through the cloud but through the publisher’s dedicated infrastructure, which will probably also become an outmoded method as the newer technology becomes more prominent.

The cloud-hosted model is real-time streaming to the PC or TV of the local user. It will be like Netflix with greater interactivity, Delgado notes. “Games may in turn be constantly updated and expanded through cloud upgrades as well,” he says. “A purchased game has the potential to evolve over time, whether its through update to its AI depending on how gamers play or through new content added to the video game world.”

These ways in which the potential of the cloud can be applied to gaming are really just a beginning. As time goes on, and as companies think of new ways to use cloud for competitive advantage, it will become more clear just how deep the effect of the technology will be.

Communicating the power of cloud within your business

Ronnie Regev, now at cloud management company RightScale, was the Ubisoft senior manager of online game operations and architecture for nine years. He came to believe that cloud is a perfect fit for apps with limited lifespans, such as mobile apps, web-based games, and social games. “[Y]ou don’t need to worry about the upfront infrastructure costs of hosting applications or back-end infrastructure,” he says. “You’re able to use what’s appropriate for you at the time, iterate quickly, and if your game isn’t a success, you can easily scrap what’s been done.”

Regev notes that it is key to consider technology in terms of what the company is trying to achieve. Finance will typically be an advocate for distributed systems, but it is always helpful to lay out explicit financial goals along with projected expenses. Finance pros may need a better grasp of cloud pricing in comparison to amortizing traditional infrastructure over the coming years, explains Regev. “Finance knows the difference between capital and operational expenses, but maybe not how to budget for that over the lifecycle of a game,” he says. “Helping finance understand the new cloud model ahead of time can make your life easier.”

Plus, gaming firms that have a DevOps flow in place will be able to more quickly deploy code and achieve their goals. In order to realize that benefit at Ubisoft, Regev adjusted the goals of the ops team so that it focused on keeping users connected rather than on uptime of the infrastructure. Unable to control reliable user connectivity by themselves, ops then collaborated with the coders so that new releases could help improve that metric. Regev notes that making this adjustment was challenging but became smoother as everyone got a stronger idea of the DevOps model.

A key thing to remember is that you can take small steps and build on your early successes, Regev explains. “Cloud success breeds cloud success,” he says. “As employees in one part of an organization see the success that colleagues are having with rapid deployment or working closely with developers, they ask to get the same benefits.”

A partner for your gaming cloud

Are you looking to supercharge your gaming business with the cloud? At Total Server Solutions, we engineered our cloud solution with speed in mind, and our SSD storage lets us provide you with the guaranteed levels of performance that you demand. Build your cloud now.

Posted by & filed under List Posts.


 

Data loss is a major problem, as indicated by an analysis of the raw information used for scientific studies. Businesses are at risk of data loss too, of course. Let’s look at how costly losing your data is and specifically assess the role of multiple disparate locations.

  • Science suffering huge data loss
  • Typical SMB scenario
  • Most enterprises lose data every year
  • Impact of location on business data loss
  • Store data in 3 or more locations
  • Cloud hosting for data loss prevention

 

Science suffering huge data loss

A disturbing article was published in the academic journal Nature in 2013. Elizabeth Gibney and Richard Van Noorden revealed that it’s possible as much as 80% of scientific data could be lost by 2033. Why, in the era of cloud computing and disaster-recovery-as-a-service, should this be happening?

Professors and other researchers admitted that they have research data in many different odd places, such as attics, garages, and even on obsolete floppy disks. Because physical information is often hidden away like that in inaccessible locations, science is losing information at a fast clip.

A review in Current Biology wanted to track down the raw information for 516 ecology studies published from 1991 to 2011. The scientists directly contacted the study authors; as indicated above, the findings were disturbing. “[W]hereas data for almost all studies published just two years ago were still accessible, the chance of them being so fell by 17% per year,” explain Gibney and Van Noorden. “Availability dropped to as little as 20% for research from the early 1990s.”

The solution to this data loss problem is straightforward: keep additional copies in geographically diverse locations. In other words, use cloud hosting powered by a CDN.
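As a rough sketch of the “additional copies in diverse locations” idea, the script below replicates a file to several backup targets. The target names are hypothetical placeholders; local directories stand in for what would really be geographically separate storage endpoints.

```python
import shutil
from pathlib import Path

# Hypothetical backup targets standing in for geographically separate
# locations (in practice: different regions, facilities, or providers).
BACKUP_TARGETS = [
    Path("backup_us_east"),
    Path("backup_eu_west"),
    Path("backup_ap_south"),
]

def replicate(source: Path) -> list:
    """Copy `source` to every backup target and return the copies made."""
    copies = []
    for target in BACKUP_TARGETS:
        target.mkdir(parents=True, exist_ok=True)  # create the location if needed
        copies.append(Path(shutil.copy2(source, target)))  # preserve metadata
    return copies
```

In a real deployment each target would be a remote endpoint (an rsync destination, an object-storage bucket, or a cloud region), but the shape of the loop is the same: one source, several independent destinations.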

Typical SMB scenario

Are you under the impression that merely backing up your company’s information is a solution in and of itself? As indicated by the loss of scientific data, it’s critical that your data is backed up in multiple locations. In this way, geographic diversification is a critical concept for all organizations, in terms of disaster recovery and business continuity.

Let’s look at this from the SMB perspective. You are a small business, and you haven’t yet set up a cloud backup system for your company. Your business goes underwater in a flash flood. You have insurance, allowing you to rebuild. The problem is that your data backup is only 20 minutes away, and that facility is flooded too. You don’t have insurance on that information – and it could be your most valuable asset.

Let’s look at how costly losing your data can be and the role of geographic diversity in lowering risk to your business.

Most enterprises lose data every year

There are two basic possibilities if you suffer extreme data loss: you can either recover through your IT team or other specialists, or the data is completely gone. Just to look at the general overall numbers though, data loss and downtime together cost businesses a massive amount of money each year.

A 2014 study, cited by Eduard Kovacs in SecurityWeek, collected responses from 3,300 IT leaders in two dozen nations. The analysis revealed that enterprises (i.e., companies with at least 250 on staff) lost an incredible $1.7 trillion over the course of the previous year to data loss and downtime. While there were fewer incidents of data loss compared to 2012, the sheer amount of data destroyed grew by 400% over the same period.

Furthermore, most enterprises lose data annually, according to the study: nearly two in three (64%) had experienced downtime or data loss within the preceding twelve months. Downtime averaged 25 hours. Just over one-third took a financial hit (36%) and/or suffered development setbacks (34%).

Impact of location on business data loss

With the generally high cost of data loss established, let’s return to the environmental discussion. The fact is that flash floods, tornadoes, fires, and other natural disasters frequently derail businesses. Here are the top reasons a company might lose its information, according to a 2015 survey highlighted by Timothy King in Solutions Review:

  1. Hardware or datacenter failure – 47%
  2. Environmental disasters – 34.5%

Knowing that data is often lost to natural events, the best advice to outfits that are establishing disaster recovery plans is to make sure their data is geographically distributed, says King. “Given that, it would seem obvious that organizations would move in that direction…, in order to apply further safeguarding to their data,” he adds. “Unfortunately, this isn’t the case.”

Where is data backup typically located? For many organizations, it is in close proximity to their business. These portions of different business categories had their backup within 25 miles of their central location:

  • Government – 46%
  • Academic – 27.5%
  • Non-profit – 23%
  • Private-sector business – 16%

When a company keeps its backup within 25 miles, there’s a good chance (though not a certainty, of course) that a single natural disaster would affect both locations – in which case the backup site could not serve its function.

That’s a big problem, says King. “The point to having a secondary server or data storage site is to avoid catastrophes that occur at the main site,” he says. “By having them so close together, the backup becomes almost worthless.”

Store data in 3 or more locations

Your data recovery plan should incorporate onsite backup, offsite backup, and online backup in at least three geographically diverse places, argues Zaid Ammari in Tech.Co. That way your chances of downtime and disruption in customer service are significantly reduced.

While many businesses keep multiple redundant copies of their data, again, the issue is that their backup systems are located nearby – creating high risk. The chance of losing a local backup is 0.25%. That may sound very low, but it’s “still too risky for critically important proprietary information,” says Ammari. Store data in three locations, on the other hand, and “the probability of data backup survival rises to 99.99 percent, virtually ensuring that the company’s data will remain completely intact regardless of the situation.”
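Ammari’s figures can be sanity-checked with a simple independence model. This is a sketch only: real disasters are correlated across nearby sites, which is exactly why geographic diversity matters.

```python
def survival_probability(p_loss_per_site: float, num_sites: int) -> float:
    """Probability that at least one copy survives, assuming each site
    loses its copy independently with probability p_loss_per_site."""
    # All copies are lost only if every site fails at once.
    return 1 - p_loss_per_site ** num_sites
```

With a 0.25% per-site loss chance, a single copy survives 99.75% of the time, while three independent copies push survival well past the 99.99% figure Ammari cites. The independence assumption breaks down when backups sit 20 minutes apart – the very scenario described above.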

Here are four key points that security thought-leaders often raise about disaster recovery and data protection:

  1. Figure out your threats. How might your information be at risk? Know possible problems such as a breach, accidental deletion, file corruption, or a natural event.
  2. Audit what you have. Knowing how to preserve your data in part depends on knowing where it is and who can get to it.
  3. Decide when redundancies are needed. Obviously back up anything mission-critical. Anything that is proprietary or contains sensitive information is certainly high-priority too.
  4. Determine your locations for backup. You want to have a minimum of three diversified off-site locations for your data, notes Ammari. “Files are less likely to be compromised if there are multiple copies stored on various media,” he says. “If a disaster strikes, duplicate copies in separate and distinct locations can help prevent a permanent data loss.”

Cloud hosting for data loss prevention

Are you wanting to better protect your business from data loss through geographic diversity? Distribute your data geographically through cloud hosting. At Total Server Solutions, our cloud uses the fastest hardware, coupled with a far-reaching network. Learn more.

Posted by & filed under List Posts.

One of the best traits that you see touted about cloud hosting is that it offers a distributed network of servers, leading to no single-point-of-failure (SPOF). What are SPOFs, what do they look like in a datacenter, and how can you avoid them on your team and in your technology? Through this discussion, we can better understand how to avoid SPOFs with personnel and why businesses value the anti-SPOF distribution of cloud technology.

 

  • SPOF – what is it?
  • SPOFs in the wild
  • Check-list to SPOF-proof your team
  • Step #1. Figure out who your SPOF people are.
  • Step #2. Think about how to rectify your SPOFs.
  • Step #3. Create redundancies to mitigate the SPOFs.
  • Step #4. Allow your development plan to serve as guidance.
  • Cloud hosting and single points of failure

 


 

SPOF – what is it?

You might have heard the term single point of failure (SPOF) in passing without knowing its exact meaning. Since it has to do with failure, it’s a key topic in networking and something every datacenter manager makes it a top priority to avoid. Specifically, a SPOF is a vulnerability – arising from a mistake in the way a system or circuit is set up, deployed, or designed – that makes it possible for a single fault to crash the whole system.

 

SPOFs in the wild

If a SPOF exists in a datacenter, it means that the data or certain services can become unavailable just because of a seemingly isolated malfunction. In fact, explains Stephen J. Bigelow in TechTarget, the datacenter can completely go down, if the interdependencies and location are mission-critical enough. “Consider a data center where a single server runs a single application,” he says. “The underlying server hardware would present a single point of failure for the application’s availability.”

 

Think about it: it’s just like a PC that isn’t backed up in any way. If the computer dies or gets hacked, that SPOF means you’ve lost all your files. In the same way, if that solo server goes down, the app either becomes unreliable or goes down with it. People become unable to get into the program, and data could be lost as well – which is both highly frustrating and highly expensive. A basic idea on the datacenter floor is to cluster servers so that more than one copy of the program is running; at least one additional server is used in this scenario.

 

If the original machine goes down, the additional one jumps in so that users are able to keep using the app. That simple anti-SPOF technique (and you can get much more complex, of course) essentially means that you can hide a failure behind the scenes, allowing users to seamlessly transition to the new server, unaware of any issues (as occurs standardly in cloud hosting environments, invisible to the end-user).

 

Looking at single-point-of-failure from a different angle shows us how broad this challenge is. Bigelow gives the example of one network switch that supplies networking for an array of servers. That is a SPOF. “If the switch failed (or simply disconnected from its power source), all of the servers connected to that switch would become inaccessible from the remainder of the network,” says Bigelow. “For a large switch, this could render dozens of servers and their workloads inaccessible.” By building in multiple redundancies, in the form of additional network connections and switches, you allow your machines access to a different pathway if a malfunction takes place. That, again, is a basic anti-SPOF method.
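The redundancy pattern described above – hiding a failure by falling back to another server or network path – can be sketched in a few lines of Python. This is a simplified illustration, not any particular product’s mechanism.

```python
def call_with_failover(replicas):
    """Try each replica in order; the first healthy one serves the
    request, hiding an individual failure from the caller."""
    last_error = None
    for replica in replicas:
        try:
            return replica()
        except Exception as err:  # a real system would match specific errors
            last_error = err      # record the failure and try the next replica
    # Only with no redundancy left does the failure become visible.
    raise RuntimeError("all replicas failed") from last_error
```

Here each callable stands in for a request to a redundant server or through an alternate switch; the caller never learns that the primary was down unless every path fails.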

 

A datacenter engineer is tasked with locating and fixing any SPOF instances within the system, at any level. Keep in mind, though, that the head of infrastructure cannot build in the needed flexibility and redundancy without a reasonable budget. Obviously, in the situations listed above, you have to pay for the extra physical servers, switches, cables, and network connections. The architect of a datacenter – or of any system – should weigh how mission-critical a workload is against the price of ridding the system of every possible SPOF. Not every system is mission-critical, and there are situations in which an architect might reasonably decide to accept a SPOF and save the money.

 

The other option is to go with cloud hosting to get rid of single points of failure through a broad distribution of servers. Before we get into cloud, though, let’s look at SPOF-proofing your staff.

 

Check-list to SPOF-proof your team

You know to remove single points of failure from your systems, but you may not think to do the same with your people. That’s important too, advises Tomas Kucera of The Geeky Leader. “One of the most often overlooked tasks of any leader is to plan his succession and to ensure he has a plan how to ensure his team works even if he loses a key contributor,” he says. “We are all so submerged in the daily tasks that we often don’t realize that we fail to make the team resilient and disaster-proof.”

 

Here are a few tactics you can use to remove single points of failure within your staff:

 

Step #1. Figure out who your SPOF people are.

Which people within your company are mission-critical? Now, it may seem obvious to point to C-level executives or other leaders. Keep in mind that directors are sometimes easier to replace than others. Really review your people with a few tough questions:

 

  • Does the individual “hold a unique knowledge?” Kucera says to ask yourself. This insight could mean “institutional knowledge, technical or just knowing lots of people that are key to your team survival and no one else knows them or has that knowledge,” he adds.
  • Do they have capabilities that are difficult to replace? That could be a top salesperson, someone who’s a big source of mentorship, or a business negotiator who keeps down your costs and gets you what you need to excel.
  • Is the individual fulfilling a specialized role that is essential to the seamless viability of your team? This person could serve in some ways as a leader even though their official role might not be executive. It can also be someone who’s pleasant or funny and helps with morale.

 

Step #2. Think about how to rectify your SPOFs.

Just like with a single point of failure within an IT system, you must have a mitigation process to remove single points of failure from your staff. However, when it comes to people, your solutions won’t be cookie-cutter.

 

Consider how the flow and/or culture of your workplace might change if each SPOF person were to quit or otherwise stop showing up to work. What types of insights, capabilities, or roles would need to be filled by another party? How would the business be harmed, on any level (internally and externally)? Think about today and about next year.

 

As you consider these VIP people, look also at your entire workforce. Are there colleagues who share some of the rare qualities of the SPOF? If not, you need redundancies; it might be a good idea to hire.

 

Step #3. Create redundancies to mitigate the SPOFs.

Did you find a colleague who might be a reasonable backup person? You need to make sure that second person is trained as a potential replacement. Create and closely monitor a development plan to share knowledge as an anti-SPOF maneuver.

 

Step #4. Allow your development plan to serve as guidance.

You want this development plan to be central to your overall team’s development. Are you going to give someone a new set of work? Are you considering your organizational structure? The SPOF development plan should be reviewed. Any time you make adjustments, try to eliminate SPOF instances. Generally, make sure you aren’t assigning everything to the same top individuals. When you rely heavily on a few people, they become single points of failure. It makes the organization less flexible and more vulnerable.

 

Moving forward, update the list every few months.

 

Cloud hosting and single points of failure

A strong cloud hosting infrastructure is decidedly built to be anti-SPOF. Single points of failure no longer need to be part of your company’s technological foundation. At Total Server Solutions, our cloud uses the fastest hardware, coupled with a far-reaching network, making everything easier and SPOF-free. We do it right.

Posted by & filed under List Posts.

 

<<< Go to Part 1

 

  • Tips & issues when adopting SSL (cont.)
  • Do the benefits of site-wide SSL outweigh the issues?
  • Extended validation SSL: what is it?
  • Other takeaways from site-wide SSL experiment
  • Netcraft SSL Survey – brand popularity
  • Market share of SSL certificates
  • Validation categories as percentages of the market
  • Securing your site with SSL

 

Tips & issues when adopting SSL (cont.)

Speed: The encryption, and the key access that is necessarily part of a private connection, will slow down your site a bit. You can implement SPDY (an open-source protocol developed mostly by Google) to adjust the processing of HTTP traffic for a little acceleration; still, the added latency is something to weigh against the obvious advantages of site-wide SSL.

 

Do the benefits of site-wide SSL outweigh the issues?

Clearly site-wide SSL is not entirely positive. However, here are a few reasons it makes sense regardless of the challenges it presents, from Web developer Andrea Whitmer – and these are simply effects she noticed from a case study of her own site:

 

  1. Her bounce rate went down, she assumes because people immediately trusted her site more. Now, let’s not gloss over this detail. A reduction in bounce rate is important – it’s a factor typically listed in top-5 and top-10 lists of key metrics for online success. (In fact, Tony Haile of Chartbeat says that 55% of visitors will spend 15 seconds or less on your site.)
  2. There were fewer questions from people related to payment. In other words, people moved more seamlessly through the sales funnel.
  3. The process was helpful simply in terms of testing.

 

It is also worth noting – actually it’s very important – that the type of SSL certificate Whitmer was using was an extended validation (EV) cert. Let’s address what an EV certificate is briefly.

 

Extended validation SSL: what is it?

OK, so a secure sockets layer certificate will encrypt transmission on the pages of your site where it is implemented, but it does something else: it validates the website owner for better credibility. That’s why an extended validation certificate is often sought by site owners. Its value lies not in a higher degree of encryption but in a higher degree of validity and, in turn, trust.

 

This is visual and obvious. If you have ever visited a site with EV active, such as PayPal, you will have seen the address bar turn green and the name of the verified company appear in your browser. These elements are additional to the lock symbol and https protocol. There are numerous case studies by Symantec and others, but the positive impact should be obvious just considering buyer psychology and the importance of online trust. Here’s an example: Overstock.com saw an 8.6% reduction in shopping cart abandonment in a Symantec case study.

 

However, there is another aspect that is helpful as well, according to the nonprofit Certification Authority Browser Forum (CA/B Forum) – the industry group that defines extended validation parameters. “The secondary objectives… [of certificates] are to help establish the legitimacy of an entity claiming to operate a Web site,” says the organization, “and to provide a vehicle that can be used to assist in addressing problems related to phishing, malware, and other forms of online identity fraud.”

 

Specifically related to phishing, consider this: if a site uses phishing and accurately mimics your site to steal your or your customer’s information, the green address bar and business name supplied by an EV SSL may be the only way for someone to tell it’s your site. What that means is that you could prevent phishing attacks, one of the major forms of online fraud, by instructing users (perhaps through a notice on the site) to only proceed if they see the EV indicators populate.

 

Other takeaways from site-wide SSL experiment

Whitmer notes that she was at first skeptical about whether site-wide SSL would help her in the search engines (even though Google itself says HTTPS improves your rankings), because the immediate improvements she’d expected didn’t materialize. Nine months after she transitioned, her search traffic was much better than before; but she points out that many other changes made to the site in the meantime could also have boosted her rankings.

 

All in all regarding search rankings, she said that it could be a good tactic if you set it up in the right way – although this aspect obviously isn’t a benefit that she can strongly argue.

 

In closing, Whitmer does advocate site-wide SSL for anyone with a site that is similar to hers. “For me, sitewide SSL has been worth the effort because of my future plans for my business,” she says, “as well as the current pages on my site using forms to collect information from visitors.”

 

Netcraft SSL Survey – brand popularity

As touched on in the first part of this piece, Netcraft conducts a monthly SSL Survey, assessing the number of SSL certificates that exist on public-facing websites. Again, the numbers from its survey account for the total number of certificates – not taking into account that the same cert is sometimes used on multiple sites (which creates browser errors anyway and is not considered valid use).

 

Market share of SSL certificates

As of January 2015, nearly one-third of SSL certificates were Symantec brands (Symantec, GeoTrust, Thawte, or RapidSSL). GoDaddy was in the second position, and Comodo in third. Those three SSL providers supplied the vast majority of certificates – accounting for greater than 75% of the market. Other brands followed in this order: GlobalSign, DigiCert, StartCom, Entrust, and Network Solutions.

 

Note that all of the certificates we sell at Total Server Solutions are from the industry’s most trusted brand, Symantec.

 

Validation categories as percentages of the market

There are three validation levels standardly recognized within the industry – and, as such, supported with consistent parameters by all the major browsers (via their agreements within the CA/B Forum, mentioned above).

 

“Domain-validated certificates simply validate control over a domain name,” notes Netcraft. “Organization-validated certificates include the identity of the organization; and Extended Validation certificates increase the level of identity checking done to meet a recognized industry standard.” The shorthand for these three certificate types is DV, OV, and EV.

 

The domain-validated cert is the least expensive. Since businesses probably vastly under-value the role of an SSL certificate in adding credibility and trust to their site, this cheapest variety is by far the best seller, accounting for nearly 70% of all sales. Meanwhile, extended validation – the most expensive but least appreciated type – represents under 5%. The rest are OV.

 

Now, consider this argument that the EV SSL is the way to go even though it is currently the least popular version: as mentioned above, Symantec’s case study of Overstock.com, run via an independent third-party research group, found an 8.6% decrease in abandoned shopping carts in EV-enabled browsers.

 

Consider that Overstock is already a highly recognized brand (so assumedly the credibility boost is lower than for most sites) and that this was essentially a split-test. An EV cert costs less than $300 more per year with a top brand from Symantec, GeoTrust. Simply put, if you do the math, this investment often makes sense.

 

Securing your site with SSL

Are you interested in what site-wide SSL might do for your conversion rate or bounce rate? Or do you just need a cert to encrypt your logins or ecommerce? Keep your transactions and communications secure with our SSL certificates at Total Server Solutions.

Posted by & filed under List Posts.

Have you considered using an SSL security certificate on your website? This technology has been growing exponentially since it was first introduced in the mid-1990s. Let’s look at whether SSL might be right for your site, the issue of partial vs. complete implementation, and some thoughts on common adoption issues.

  • What is an SSL security certificate?
  • Study: Certificate growth rapid across the Web
  • Should you have SSL on your site?
  • Is site-wide SSL right for you?
  • Tips & issues when adopting SSL

 

What is an SSL security certificate?

The SSL/TLS protocol (which stands for secure sockets layer / transport layer security) is a simple way to secure the exchange of information online. Accepted and promoted by all the major browser and operating system companies, certificates that follow its standards are responsible for the HTTPS protocol, the lock icon, and, in some cases (with specific types of additional validation), green indicators and/or company validation. Through these means, which populate automatically once the technology is installed, you are able to establish a private connection with whoever uses your system.

 

To get an SSL certificate working with your site, you essentially couple whatever domain or subdomain you designate with a cryptographic key. No matter what level of certificate validation you purchase (domain, organization, or extended), the verification and connection of your site is performed by a certificate authority (CA). The CA signs the certificate so that anyone visiting your site can (if they choose) check the firm behind your security mechanisms.

 

Study: Certificate growth rapid across the Web

Brand-name SSL certificates make up the majority of those found online. Although self-signed certificates can also encrypt traffic, they generate browser error messages because no trusted authority vouches for them. The good news is that while SSL certificate prices vary, there are certainly options to fit every website’s budget.

 

Probably the primary ongoing analysis of SSL adoption is the Netcraft SSL Server Survey. It “has been running since 1996 and has tracked the evolution of this marketplace from its inception,” notes Netcraft. “[T]here are now more than one thousand times more certificates on the web… than in 1996.”

 

Looking at the simple number of certificates is the easiest and clearest way to gauge adoption, although it should be understood that sometimes the same certificate is used on multiple sites. Again, as with non-brand certs, you will get browser messages that don’t mean your SSL chain is broken but that visitors cannot validate site ownership, causing obvious trust issues.

 

Should you have SSL on your site?

Is it a good idea to get an SSL certificate for your site, for a portion of it or the entire thing? Let’s look at that issue.

 

The first thing in favor of using SSL is that Google and other search engines now give you a better ranking if you implement the technology. That’s one factor in favor of site-wide use.

 

Do you do ecommerce on your site? Then you want one, notes Web developer Andrea Whitmer. “If you’re taking credit card payments directly on your website, you definitely need SSL in place to encrypt your customers’ credit card information,” she says. “However, that doesn’t necessarily mean you need it on your entire site.”

 

OK, so why would you want it on just part of your site? SSL encryption does, as you can imagine, slow your site down a bit, which can be a hit to your user experience (though it’s countered by the positive UX of the security you’re providing). You might want it in your shopping cart, for example. However, you don’t need one for PayPal purchasing, since PayPal itself takes care of the SSL.

 

Not everyone has their own ecommerce app, but you probably have a way for people to create user accounts. Assuming that’s the case, you do want encryption for those logins so that the accounts actually are private and safe. “After all, your members are giving you their email addresses, names, and passwords, all of which they likely use on other sites,” says Whitmer. “Do you really want to risk being responsible for a security breach that results in your members’ information being spread across the whole internet?”

 

Even if people don’t have accounts, you similarly will want an SSL certificate if people are sending or uploading personal data through a form. If you have forms in addition to logins and shopping, you want a cert on each of those site areas. If they are within subdomains, you can get a cert type called a wildcard that covers unlimited subdomains. Otherwise, a standard certificate covers just one domain or one subdomain.

 

Generally speaking, businesses that are only posting content do not bother with an SSL certificate. That’s because there is no particularly sensitive information changing hands and no need to comfort someone with a reputable-brand cert while they proceed through a sales funnel.

 

Is site-wide SSL right for you?

Here are three advantages to applying SSL technology to your entire site rather than just the sections where users log in or make purchases:

 

  1. User confidence. Everyone feels a little uneasy when they are getting ready to put payment information or even a physical address into the system of an unfamiliar company. It’s easy to reduce the fear of using your site or making a purchase by adopting SSL – so guests can be comforted (for good reason) by the lock icon.
  2. You may just want to figure out if your site traffic and engagement will or will not improve with SSL. You can implement for a rough idea, or split-test for greater clarity.
  3. The future-proofed site. If you think you will eventually want areas of your site that use this technology, it can be a good idea to set up a cert well in advance rather than grappling with an unfamiliar component in the middle of a launch.

 

Tips & issues when adopting SSL

  1. Social numbers: The transition to the new protocol can wipe out social share counts – and with them the social proof that impresses potential buyers – which is seriously frustrating. For those situations, if you are on WordPress, there is a plugin called Social Warfare that allows you to get back any share data that floats away.
  2. Social plugins: The plugins are often not secure, and when their setting is switched over to https, you can end up with a number of glitches. This requires troubleshooting.
  3. Internal links: currently your site links to the http versions of its pages. Once the site becomes https, you will need everything to forward to the secured version. A 301 redirect is a quick fix, but you may not want to rely so heavily on redirects – in which case, you simply need to add the s (following http) within the URLs.
  4. Additional plugin problems: Many plugins were not built to work correctly with https. Expect to potentially have to switch to a different plugin or at least to get a patch enabling you to use the SSL without error messages.
  5. Webmaster Tools: “[R]emove and re-add your site in Google’s Webmaster Tools (or at least do a change of address) and submit a new sitemap to force re-indexing of your site using https,” notes Whitmer. Be aware that when you submit the new map, you may see your traffic temporarily decline.
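On the internal-links point, the quick-fix 301 redirect is usually configured at the web server rather than in the application. Here is a minimal sketch, assuming an nginx server (“example.com” stands in for your own domain):

```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    # Send every plain-HTTP request to its HTTPS counterpart,
    # preserving the host, path, and query string
    return 301 https://$host$request_uri;
}
```

Apache admins would do the equivalent with a Redirect or RewriteRule directive. Either way, test a handful of deep links afterward to confirm the redirect preserves paths and query strings.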

 

Should you get an SSL cert?

Maybe you know exactly what you need in an SSL certificate. If not, we can advise you further. At Total Server Solutions, our expert team is made up of individuals with the highest levels of integrity and professionalism, allowing us to guide you in the direction of a comprehensively optimized website. See our SSL Certificate Options.

Posted by & filed under List Posts.

Cloud is fast becoming the go-to solution for business computing systems, replacing the traditional legacy model. How can you make the most of a cloud transition?

  • Cloud becoming the dominant business technology
  • Tip #1 – Consider your goals.
  • Tip #2 – Scrutinize your options.
  • Tip #3 – Look at your current investment.
  • Tip #4 – Dip a toe at a time.
  • Fast, reliable, scalable cloud

 

Cloud becoming the dominant business technology

Are you looking at how your company should be spending its computing budget? The extent to which organizations are committing to the cloud really is kind of stunning. In fact, more than nine out of ten businesses (93 percent) have implemented some type of cloud solution, according to the annual RightScale State of the Cloud report.

 

As would be expected as the industry becomes more developed and mature, companies are introducing more complexity into their cloud services. That’s because many companies are choosing to blend different options. More than four in five firms (82%, a rise from 74% in 2014) say that their cloud is a hybrid – an integration of a private cloud (hosted on-site or through a third party) with a remote public cloud in an independent data center.

 

Of course there has been a huge amount of hype surrounding this technology, but it’s also central to a very real computing revolution – a move to the third platform (cloud, mobile, social, and big data). Generally speaking, information technology has experienced “a shift toward purchasing virtualized, digital services that replace physical equipment,” reports the Wall Street Journal.

 

As cloud becomes more prevalent, the conversation about its general benefits becomes a discussion of how to migrate successfully.

 

Tip #1 – Consider your goals.

It’s important to know the position of your business and what you intend to gain from cloud adoption. Becoming more agile and flexible (so you can adapt quickly to changing marketplace conditions) is the biggest advantage, according to the Open Group. Here are five other primary benefits:

 

  1. Cuts your costs
  2. Consolidates your systems and makes them easier to manage
  3. Gives you access at any location where you have Internet
  4. Allows you to work more easily and immediately with others (internal and external)
  5. Is the sustainable choice, because it’s designed for optimal infrastructural efficiency with lower power use.

 

Tip #2 – Scrutinize your options.

You want to gauge different providers from every possible angle, of course. Look at these parameters:

 

Security

 

“What you want to know is how the cloud provider manages data security, its history of regulatory compliance, and its data privacy policies,” says Business.com. “If a cloud service has clients that deal with confidential and sensitive information you can have some degree of confidence they’ll handle your data in a similarly secure fashion.”

 

In other words, you want to look for PCI compliance and an SSAE-16 Type II audit, signs that the cloud abides by strict IT standards. Also check for testimonials or other reviews.

 

Affordability

 

It can be a little tricky to figure out exactly what a cloud service is going to cost. Make sure that your service-level agreement (SLA) is clear and properly protects you. Ask whatever questions you may have so that you aren’t caught off-guard.

 

Public, private & hybrid

 

Private clouds are sometimes preferred by organizations for compliance or to have the utmost possible control of the system. A private cloud also allows a business to customize parameters as needed. The primary issue with a private cloud is expense: you aren’t leveraging the same economies of scale as with the public version (since that one spreads infrastructure across multiple clients). Public cloud is also easier to scale – which is helpful not just for business growth but for seasonal businesses and even common peaks such as Black Friday.

 

With public cloud, you are essentially able to use an operating-expense rather than a capital-expense model: you pay for what you need, the actual amount of data you need to process. The cloud service provider (CSP) keeps the system properly up-to-date and safe, which in turn means you can focus on your core business.

 

Tip #3 – Look at your current investment.

The cloud is probably most attractive to startups simply because there’s so little upfront expense. Some companies already have their own data centers, though.

 

Brien Posey notes in TechTarget that companies often leave behind their legacy architecture, in part because it is always on the road toward decay. “Outsourcing a server’s data and/or functionality to the cloud may mean abandoning your on-premises investment unless an on-premises server can be repositioned,” he says. “No matter how good it is, any server hardware eventually becomes obsolete.”

 

Large companies understand that their infrastructure will eventually no longer be usable, of course. The standard way to build equipment’s aging process into the business plan is through a hardware lifecycle policy. A very straightforward one, for instance, would be to get rid of all servers once they have been deployed for five years.

 

Keep in mind that cloud is not an either/or proposition. Many organizations choose to interweave their lifecycle policy with their adoption of cloud. This simple step makes it possible for IT teams to switch from on-site servers to cloud rather than buying updated equipment.

 

That’s also evident in the hybrid cloud scenario, which is fundamentally an integration of private and public cloud components (with the private cloud either on-premises or hosted in a third-party data center). Some companies choose to keep certain systems in their own facility because the process of redesigning and testing them for cloud doesn’t make sound business sense immediately. While older applications typically involve more debate, new apps are more often built for cloud without hesitation.

 

While traditional computing is still used for portions of many companies’ infrastructures, you do want to explore whether it makes sense to keep any legacy systems in place at all. Patrick Gray of TechRepublic thinks that cloud is quickly becoming the successor to the dedicated approach to computing. “Cloud computing has completely revolutionized several sectors that were once dominated by large and expensive legacy applications,” he says. “The CRM (Customer Relationship Management) space is a major example, where companies can now provision an enterprise solution with a credit card.”

 

That one example alone means that firms don’t have to shoulder a major capital expense. Plus, you don’t have to get specialists to assess the type of equipment you need and engineers to set it up in your data center. You can see the sea change that occurs when you decide you will no longer focus internally on maintaining your own raw infrastructural resources for computing.

 

Tip #4 – Dip a toe at a time.

One thing you want to remember about cloud, as indicated above, is that it doesn’t require you to toss your current hardware. In fact, one reason people are so attracted to the technology is that you can access whatever amount of computing power you need, changing it as you go.

 

Gartner analyst Elizabeth Dunlea says that the best way to approach cloud is to think of it in terms of the needs you are meeting as opposed to a collection of technological components. “By tackling one service at a time, it’s easier to measure what worked and what didn’t,” she says. “This is where best practices are drawn for future deployments.”

 

Fast, reliable, scalable cloud

Are you in need of cloud hosting that meets your expectations for this revolutionary, highly touted technology? At Total Server Solutions, our ultra-fast hardware and far-reaching network make everything easier and more transparent. We do it right.