Posted by & filed under List Posts.

Atlanta, GA (May 9, 2019) – TSS understands that technology is the foundation of innovation, and that technology is more impactful and sustainable when it is inclusive. With that vision, Total Server Solutions (“TSS”) aims to scale its support of Women in Technology (“WIT”) alongside the growth of the company.

Besides encouraging employees to attend WIT conferences and networking events, TSS sponsors a monthly local WIT meeting hosted and run by Atlanta Tech Village (“ATV”). At this WIT event, women from the ATV startup community and beyond enjoy lunch while discussing topics that further ATV’s Women + Tech mission of “empower(ing) women in tech through a kick-ass network, intentional teaching & purposeful community because it does take a village.”

TSS has also recently become more involved with The Indus Entrepreneurs (“TiE”) Women’s Council, a group that distinguishes itself from others by focusing on and fostering entrepreneurship and intrapreneurship in women. Kelly Wong, Director of Product at TSS, has been inducted into the core council of the group and intends to use the platform to further TSS’ vision of innovation through inclusion. Kanchana Raman, head of the TiE Atlanta Women’s Council and President and CEO of Avion Networks, said, “Kelly attended the launch of the Women’s group at TiE Atlanta at the recommendation of her CEO Gary Simat, but since then she has been deeply committed to growing the group and helping in any way she can. Her enthusiasm is commendable. We are so excited she will be part of our core group in serving TiE Atlanta Women’s initiatives.” Because of his avid support of women’s causes, TSS CEO Gary Simat has been asked to speak on a panel at the TiE women’s event being held July 25th, 2019. Simat is no stranger to TiE, having been named an Upper Market Top Atlanta Entrepreneur in 2018.

To ensure that TSS maintains the momentum of their Women in Tech initiative, Simat and Wong are developing the TSS Women’s Council (“TWC”). The TWC will provide TSS women a formal forum in which they can discuss workplace topics that are relevant to women and identify opportunities where TSS can have a positive impact on the greater Atlanta WIT community. “Professional women’s groups have been integral to my career path, so it’s really important to me that I ensure that other women have the same resources. TSS is already a great place for women to work,” said Wong. “Through this group we’ll strive to make TSS the best place for women to work.”

“In this day and age, supporting women in business is table stakes. At TSS, we want to take our support a step further by internalizing the value of women’s topics in our company culture,” said Simat. “We aim to be advocates and thought leaders in the WIT space, starting within our own walls, by encouraging and empowering women to make bold decisions within TSS and their own careers.”

Stay tuned for additional information regarding future TSS WIT announcements and events.

RSVP here for ATV’s next Women + Tech Meetup:




Kelly Wong

Total Server Solutions

+1 (855) 227-1939 Ext 628


Tucker Kroll

Total Server Solutions


Atlanta Business Chronicle Leadership Trust is an Invitation-Only Community for Top Business Decision Makers in the Atlanta Region

Atlanta, GA (March 19, 2019) — Gary Simat, CEO of Total Server Solutions, has been invited to join Atlanta Business Chronicle Leadership Trust, an exclusive community for influential business leaders, executives and entrepreneurs in the Atlanta area.

Gary was chosen for membership by the Atlanta Business Chronicle Leadership Trust Selection Committee due to his experience, leadership, and influence in the local business landscape and beyond. Gary is presently the founding CEO of Total Server Solutions, an Atlanta-based managed infrastructure company where he and his team of 105 engineers aim to provide stable, secure, and scalable solutions for their clients around the globe. The company has recently received several recognitions, including placement on the Inc. 5000 for 2017 and 2018 and Atlanta Business Chronicle’s 2018 Pacesetter list. Gary was also recognized in 2018 as an Atlanta Top Entrepreneur – Upper Middle Market by TiE Atlanta.

“Atlanta’s thriving business community is powered by leaders like Gary,” said David Rubinger, president and publisher of Atlanta Business Chronicle. “We’re honored to be creating a space where the region’s business influencers come together to increase their impact on the community, build their businesses and connect with and strengthen one another.”

As an invited member, Gary will contribute articles to the Atlanta Business Chronicle website and participate alongside fellow members in Expert Panels. He will connect and collaborate with a vetted network of local leaders in a members-only directory and a private forum on the group’s mobile app. Gary will also benefit from leadership and business coaching, an Executive Profile on the Atlanta Business Chronicle website, select partner discounts and services and ongoing support from the community’s concierge team.

“It’s exciting to be part of an elite group of Atlanta-based entrepreneurs. There is a vast amount of experience that can be shared all around the table. I look forward to assisting in bringing up the next generation of entrepreneurs in cooperation with some of the other members,” said Gary Simat, CEO of Total Server Solutions.

The Atlanta Business Chronicle Leadership Trust team is honored to welcome Gary to the community and looks forward to helping him elevate his personal brand, strengthen his circle of trusted advisors and position him to further impact the Atlanta business community and beyond.

About Business Journals Leadership Trust

Atlanta Business Chronicle Leadership Trust is a part of Business Journals Leadership Trust — a collective of invitation-only networks of influential business leaders, executives and entrepreneurs in your community. Membership is based on an application and selection committee review. Benefits include private online forums, the ability to publish insights on, business and executive coaching and a dedicated concierge team. To learn more and find out if you qualify, visit

About Total Server Solutions

Total Server Solutions is an IT services company with 31 PoPs across the globe focusing on connecting businesses to their customers and the data they need… anywhere. The TSS platform includes Managed Colocation, Dedicated Server, Managed Private Cloud, Bare Metal Server, CDN, DRaaS, Backup & Recovery, and a low latency, high performance network enabling customers to securely and seamlessly move workloads anywhere in the world.



Gary Simat
Total Server Solutions
+1 (855) 227-1939 Ext 649

Tucker Kroll
Total Server Solutions


There are many ways that ecommerce companies make mistakes. Key ones include a failure to focus on social media, requiring account creation to buy, racing through the images, not centering on your products, and relocating a frequently used site feature.

Ecommerce mistakes of all types often stem from myths or misconceptions held by ecommerce professionals. Here are some of the key myths within online retail, and how to correct those false notions for better all-around performance.

Myth: You can make money online simply by setting up shop.

Truth: Actually, you need to find a way to reach customers, not just set up a site – at least if you want your traffic to be finding its way to you through the Internet. An ecommerce site cannot simply be pictures of products with descriptions and prices. There must be substance on the site in the form of content. There should be information that helps people solve their issues and get solid advice.

A key early misconception was that store leases and inventory costs could be completely avoided as the digital revolution proceeded. The idea at that point was that you could simply rent a number of warehouses, put them in carefully selected locations, and then ship via hub-and-spoke (or nationally if desired).

The notion that setting up an online store was enough to succeed in and of itself is problematic because reach is useless “as long as the reach does not translate to visibility, discoverability, and real transactions,” said T.N. Hari.

Myth: The way to win in ecommerce is price.

Truth: Price is certainly a top question people have when they shop for just about anything. However, other elements will be involved when someone is looking at options. “Truth is,” said Tim J. Smith, PhD, “customers don’t buy because of price alone.”

Trying to win on price typically requires substantial buying power that is only available to the largest market players. The way that a new business within ecommerce can succeed is typically in terms of customer service, and that might involve the following:

  • Intent and desire to satisfy customers
  • Great usability with the shopping cart and website
  • Personalized service
  • Quick response time to customer requests
  • Feedback and rating channels for customers
  • Strong presence on Facebook, Twitter, and YouTube, among other social platforms.

Myth: Ecommerce is all about disruption.

Truth: The biggest ecommerce firms are all huge corporations. Startups can make an impact or go after niches, but the behemoths will likely maintain most of the top positions (in order, Amazon, Walmart, Apple, Home Depot, and Best Buy), as indicated by Dennis.

In fact, many of the biggest names in ecommerce – Sears, Staples, Lowe’s, and others – are long-time household brands.

Myth: Once you have a concept, scaling ecommerce is simple.

Truth: The incredible scalability of modern technology, when applied to ecommerce, can allow you to grow very efficiently. Of course, a brick-and-mortar business will take longer to build because of physical construction, local staffing, and various other additional costs.

Ecommerce scaling challenges are real, though. “Much of this can be traced back to the ridiculously high… costs of customer acquisition,” said Dennis, “as well as what often turn out to be expensive and/or complicated issues stemming from the high rate of customer product returns.”

The issue with returns goes back to a core weakness of ecommerce that we all know is unavoidable: you cannot pick up the product, see it, and touch it in person. Additionally, a malicious customer might claim that a product never arrived, while the ecommerce store lacks a system with which it can adequately prove delivery.

As for customer acquisition costs, the key is to protect your margins and build in sustainable retention through differentiation.

Finally, it is expensive and complex to develop a logistics network that is affordable to run.

Myth: The only KPIs you need to focus on are profit and revenue.

Truth: During the early days of the Internet, it was unnecessary for anyone to have to pitch anything to a board because most online shops were single-person endeavors. For that reason, ecommerce professionals were initially known for sometimes pointing to numbers that did not necessarily translate to business concerns. Profit and revenue became more core to the field as justifications for spending once web marketers started joining larger organizations.

Today, revenue and profit are so much at the forefront that every other possible measurement may be neglected.

There are various other key performance indicators that matter. Customer lifetime value, conversion rate, and channel traffic are top ones recommended by Pratik Dholakiya.
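As a rough illustration of how two of these KPIs are computed, here is a short sketch. The formulas are common simplified definitions, and all field names and numbers are hypothetical, not drawn from any real store:

```python
# Illustrative KPI calculations; definitions are simplified and the
# numbers below are made up for demonstration only.

def conversion_rate(orders: int, sessions: int) -> float:
    """Share of site sessions that end in a purchase."""
    return orders / sessions if sessions else 0.0

def customer_lifetime_value(avg_order_value: float,
                            orders_per_year: float,
                            years_retained: float) -> float:
    """A simple LTV estimate: average yearly spend times retention span."""
    return avg_order_value * orders_per_year * years_retained

print(round(conversion_rate(120, 4000), 3))   # 0.03 (3% of sessions convert)
print(customer_lifetime_value(48.0, 4, 3))    # 576.0
```

Tracking numbers like these alongside revenue gives a fuller picture of where growth actually comes from.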

Myth: Email is obsolete.

Truth: Another pernicious myth is thinking email is no longer a key online tool. Texting or messaging may help you contact an individual; however, people are less likely to open messages from businesses through those media. Nor can a social post be expected to reach all of your customers. For these reasons, Dholakiya noted in 2018 that “email is still the default communication tool for connecting with people over the Internet.”

Email arrives in an inbox that must eventually be emptied, which means the account holder will probably see at least most of the subject line.

The truth is that you need to carefully strategize how to build your email list – the opposite of avoidance.

Myth: Third-party security is standardized.

Truth: You undoubtedly use many outside services to fulfill business functions, such as infrastructure (i.e., web hosting, or infrastructure-as-a-service, IaaS, if cloud) and software (software-as-a-service, or SaaS, if cloud). These providers are not all alike – again, think differentiation, as noted above.

However, a key concern in these scenarios is security. While security best practices may be generally accepted across the industry, the specific protocols used by one provider versus another will differ.

There is a way to know whether the providers you are considering follow security best practices, and that is by leveraging the specifications of an additional third party: the American Institute of CPAs (AICPA). The AICPA is known for its Statement on Standards for Attestation Engagements No. 16 and No. 18 (the latter a lightly updated version of the former).

By looking for ecommerce systems that are both PCI-compliant and audited to meet the parameters of the nation’s top accounting association, you can know that you are conducting true security due diligence.

Myth: Your hosting speed does not matter.

Truth: Another aspect of a site that can be neglected is the speed of its hosting. The servers must be functioning optimally.

Your speed is fundamental. Online users are not known for having very long attention spans – they want to get what they need and move on. If your site does not load rapidly, your credibility will suffer and customer satisfaction will plummet. As your site gets slower, churn and bounce rate climb while page views – and revenue – fall. People can simply get to your competitors too quickly.

As page load rises by fractions of a second, so does bounce rate, according to studies.
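One simple way to keep an eye on this is to measure fetch time from your own monitoring scripts. The sketch below is a minimal, hypothetical example using only the Python standard library; it captures network and server response time, not full browser rendering:

```python
# Minimal page-load timing sketch (standard library only).
# This measures HTML fetch time; browser rendering and asset
# downloads add more time on top of what is reported here.
import time
from urllib.request import urlopen

def timed(action) -> float:
    """Return the seconds taken to run a zero-argument callable."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

def time_page_load(url: str, timeout: float = 10.0) -> float:
    """Time a full HTML fetch of the given URL."""
    return timed(lambda: urlopen(url, timeout=timeout).read())

# Example (requires network access):
# print(f"home page: {time_page_load('https://www.example.com'):.2f}s")
```

Running a check like this on a schedule, and alerting when the number creeps up, catches slowdowns before your bounce rate does.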

Now, of course, speed is not just about the servers and network. For instance, optimizing your images – and the alt-text that describes them – helps both load times and search visibility site-wide.

Your ecommerce partner

Do you want to build ecommerce smartly and securely? Critical to your site are speed and security of your hosting. At Total Server Solutions, we’re your best choice for comprehensive ecommerce solutions, software, hosting, and service. See our secure ecommerce solutions.



<<< Go to Cloud Security 101 – Part 1 – Key Threats

Now that we have looked at critical cloud security threats, we examine key defensive measures to protect yourself from them. These recommendations are by no means exhaustive but represent top points of focus.

Secure your client devices.

You want to implement firewalls to protect the perimeter of your network and deploy advanced endpoint security, as recommended by Rao Papolu, PhD.

Understand cloud security models.

You can use different tools or models in order to conceptualize and systematize security. Common model types, per the Cloud Security Alliance (CSA), are design patterns (reusable ways to resolve certain issues); reference architectures (templates through which to deploy protections); control models (information on and organization of certain security controls); and conceptual frameworks (description and images of cloud security principles).

The CSA endorses a number of specific models in its guidance.

Focus on three-pronged access management.

Key capabilities of access management are the creation and enforcement of access policies; assignment of access rights to users; and user identification and authentication:

  • Set up access policies – Cloud service providers provide content delivery services, virtual disks, blob storage, and other storage services. Access policies should be service-specific – unique to that service, said CERT security solutions engineer Don Faatz. This specificity of access policies underscores the importance of choosing a flexible provider who can both help you get the right information and design your systems to align with security best practices.
  • Assign access rights – You want rights and privileges to be assigned appropriately. Rights must fit roles; and as a whole, roles should make certain that no single individual is able to negatively impact the complete virtual infrastructure. Determining rights is about figuring out the roles applicable to consumer and shared responsibilities. Hemming in what a system manager or developer can do is achievable through role-based access control. Through this method, you limit system managers to designated resources and make it so that developers can only access the projects assigned to them. “Limiting access can limit the impact of a credential compromise or a malicious insider,” noted Faatz.
  • Identify and authenticate – A nefarious party could potentially steal the credentials of someone with special privileges and use them to control and change cloud setups. By introducing an additional factor to get into the account, you lower the chance of intrusion by forcing users to take additional steps. Hence, multifactor authentication (MFA) is critical.
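The role-based access control described above can be reduced to a simple idea: deny by default, and grant each role only the actions it needs. The sketch below is a hypothetical, illustrative model; real deployments would express these rules through their cloud provider's IAM service rather than application code:

```python
# Minimal role-based access control sketch. Roles, actions, and the
# permission map are all illustrative assumptions, not a real IAM policy.

ROLE_PERMISSIONS = {
    "system_manager": {"vm.restart", "vm.resize"},         # designated resources only
    "developer":      {"project.read", "project.deploy"},  # only assigned projects
    "auditor":        {"logs.read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: a role may perform only explicitly granted actions."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("developer", "project.deploy")
assert not is_allowed("developer", "vm.restart")  # limits blast radius of a compromise
```

Because an unknown role or action simply returns False, a stolen developer credential cannot touch the virtual machines, which is exactly the containment Faatz describes.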

Use data classification methods.

You need to understand your data as you consider what security protections you require. Data security is becoming more and more intricate as companies handle an increasing amount of unstructured data. “Treating [all data] the same is a recipe for security failures inside or outside a cloud environment,” noted Samuel Greengard, author of The Internet of Things (MIT Press, 2015).

The value of data protection rises in the context of compliance needs such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and Sarbanes-Oxley (SOX). Plus, risk tolerance should generally be considered when assessing cloud, with additional security added based on your assessed value of the data.

Use a straightforward cloud security process model.

Depending on the specific scenario of a single cloud project, you will have different design models, processes, controls, and setup specifications. However, the process can typically flow as follows, noted the CSA:

  • Assess and list the current controls, along with compliance and security needs.
  • Determine your implementation model, cloud service provider, and plan.
  • Decide on specifics of your architecture.
  • Review present cybersecurity controls.
  • Locate any gaps in controls.
  • Resolve controls to close the gaps, and set them up.
  • Monitor and adjust as time passes.

It is important to understand who the provider is before you begin thinking about what you need in controls. Then you can seamlessly look at requirements, construct your architecture, and identify the gaps you must address.

It is key to look at each project individually, given the disparity between cloud providers and between the different services they provide.

Review the safeguards that are implemented.

Encryption is key to your security. It is so important that some organizations encrypt prior to uploading, even when using a setup that encrypts data. You want to know about at-rest encryption as well as in-motion. You also want to have strong protections in place to defend yourself against security flaws within software and other threats. Specifically, be certain you have distributed denial of service (DDoS) protection, advised Greengard.

Manage vulnerability.

Attack simulation and patch simulation are the two key elements of vulnerability management, advised Tim Woods in Hacker Noon:

  • Simulating attacks – By designing and launching what an attack would look like and achieve, you bring together your security policies and controls with vulnerabilities. Seeing what happens with your current setup when faced with a vulnerability allows you to understand how malicious access could occur so you can prevent compromise.
  • Simulating patches – Simulating patching is about filtering and considering rather than simply applying patch after patch. “Patch simulation done effectively makes patching focused, targeted and strategic,” said Woods. Patch simulation is about checking various ways to solve problems by trying patches and analyzing if they optimize risk mitigation by broadly minimizing vulnerabilities.

Manage control changes.

Assess the controls that you have implemented. Many people wrongly think that they can manage cloud in the same manner as an onsite server if they have the cloud provider host their traditional firewall, said Woods – which does not address cloud-specific architecture.

Achieve continuous compliance.

Compliance is not simply about making sure you are meeting regulations. Set aside the anxieties and concerns of an audit. Focus on the very real security objectives and motivations of your organization.

If you use automation well and orchestrate your systems to work integrally, you can achieve continuous compliance. To meet this end, focus on centralizing security methods and technologies. You want your controls to be in a single location so that you can coherently make any changes and then use real-time benchmarking to assess them. You need to know immediately if you are compliant so you can act quickly if you are not.
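At its core, continuous compliance means continuously comparing what is deployed against a required baseline so a gap is flagged the moment it appears, not at audit time. The following is a deliberately simplified sketch of that comparison; the control names are illustrative assumptions, and a real implementation would pull the deployed set from your cloud provider's APIs:

```python
# Hypothetical continuous-compliance check: required controls vs. what is
# actually deployed. Control names are illustrative, not a real standard.

REQUIRED_CONTROLS = {
    "mfa",
    "encryption_at_rest",
    "encryption_in_transit",
    "access_logging",
    "ddos_protection",
}

def compliance_gaps(deployed: set) -> set:
    """Return the required controls missing from the deployed set."""
    return REQUIRED_CONTROLS - deployed

gaps = compliance_gaps({"mfa", "access_logging", "encryption_at_rest"})
print(sorted(gaps))  # ['ddos_protection', 'encryption_in_transit']
```

Run on a schedule and wired to alerting, a check like this is the "know immediately if you are compliant" signal described above.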

Consider how cloud providers address security and privacy.

Look at your service agreements to see how your cloud provider presents its security foundation and responsibilities. Note that cloud providers will sometimes change service agreements, tweaking them in ways that can be detrimental to security or privacy, noted Greengard.

Anything that you do not understand related to procedures, policies, and security components is ideally discussed upfront. Of course, a strong provider should be able to explain its setup, commitments, and what it is specifically doing to protect you at any point.

Cloud security – preferable and about trust

Your cloud providers should adhere to the same or better security best practices than you have implemented at your own organization. In fact, the security provided by cloud providers is better than what is available at most on-premise data centers, as indicated by a 2017 poll of 300 IT professionals.

While the security of cloud systems may be preferable to those of in-house systems, you still need to follow best practices (i.e., the provider cannot handle everything – such as how you store passwords or configure their environments). Plus, you must find providers you can trust. At Total Server Solutions, we’re trusted by educational institutions, government agencies, financial institutions, and telecom firms to keep their data on-line and available. Your secure cloud starts here.



Go to Cloud Security 101 – Part 2 – Key Defensive Strategies >>>

There are general changes in cloud computing that are of interest, but security is such a fundamental concern for online business that it deserves its own attention. This two-part Cloud Security 101 piece looks essentially at the problem and the solution, answering these questions:

  • What are Cloud Security Threats in 2019?
  • How is Cloud Security Changing in 2019?

Key Cloud Security Threats

Numerous threats must be addressed by cloud providers and cloud customers to create a secure environment. Some of the key ones are described below. 

Denial of service

Any service can be targeted with a DoS attack, which is an effort to prevent the legitimate users of a service from being able to access it. Typically, DoS attacks are accomplished via distributed denial of service (DDoS): the attackers force the cloud provider to consume a huge volume of resources responding to a barrage of bogus requests; in turn, systems can become incredibly slow or be forced offline.
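One basic building block of DoS mitigation is per-client rate limiting: cap how many requests any one source may make within a time window. The sketch below is a minimal, hypothetical fixed-window limiter; the thresholds are made up, and a true DDoS requires upstream scrubbing since no single server can absorb a distributed flood on its own:

```python
# Minimal per-client rate limiter sketch. Thresholds are illustrative;
# real DDoS defense happens upstream, before traffic reaches your server.
from collections import defaultdict

class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(list)  # client id -> recent request timestamps

    def allow(self, client: str, now: float) -> bool:
        """Record a request; return False once a client exceeds the window cap."""
        recent = [t for t in self.hits[client] if now - t < self.window]
        self.hits[client] = recent
        if len(recent) >= self.max_requests:
            return False  # over the cap: drop or throttle this request
        self.hits[client].append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("attacker", t) for t in (0.0, 0.1, 0.2, 0.3)])
# [True, True, True, False]
```

A legitimate client making occasional requests never hits the cap, while a flood from one source is cut off after the third request in any one-second window.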

Lack of appropriate due diligence

Due diligence is critical for executives as they compare different cloud service providers (CSPs), noted the Cloud Security Alliance. If due diligence is insufficient, it introduces various risks to the organization.

The insider threat

One critical thing about security that is often forgotten is people. There is ultimately vulnerability related to individuals; in healthcare, the industry most at risk for this threat, more than half of breaches are caused by insiders, per one study. Building stronger technologies against external threats helps but does not protect against the insider threat. Remarkably, until that threat is properly addressed, the danger posed by your own staff can exceed that of outside hackers.


Ransomware

Ransomware attacks are going to continue to increase, suggested Kanti S. in Analytics Insight. Companies should combine strategic human intelligence with machine learning algorithms to better respond to cyberattacks.

Data loss

As noted by the Cloud Security Alliance, you could experience data loss due to a malicious attack. Customer data could be permanently lost in the event of an earthquake, fire, or other physical catastrophe at a cloud service provider. This threat is mitigated by checking providers for data recovery and business continuity best practices so data backup is enforced. 

Shadow IT

One aspect of cybersecurity that seemed to receive greater discussion in the past, prior to vast incorporation of cloud services and policies to address it, is shadow IT. Shadow IT continues to be a massive problem, though. In fact, more than 4 in 5 employees say that they use apps that are not approved for use – i.e., the vast majority of people in the workforce are using shadow IT.

The individuals who do use apps that fall outside the organizational umbrella are simply trying to perform their jobs better; they are not trying to be negligent or malicious. The other aspect is that everyone wants to be independent and does not want everything they do to be watched. However, that oversight, from an organizational perspective, is central to security and compliance. You cannot apply software updates, perform backups, monitor access logs, or do any other management if you are unaware of the rogue apps. If IT is charged with data compliance and security, what is not under the oversight of IT is a threat. 

Operational technology (OT) systems attacks

Business operations are threatened by attacks on critical infrastructure, mining operations, and manufacturing plants. These attacks, which are becoming more common, are a threat to the lives and health of the general public and employees. Notably, OT systems cannot be protected using the same means as are used for information technology systems, as indicated by a November report in Security Boulevard. Even with air-gapping, you need more than a standalone solution. 

Advanced persistent threats (APTs)

In order to steal data, sometimes cybercriminals will deploy cyberattacks that behave similarly to a parasite. They first breach your system and create a foothold. From there, they can start taking away your data. Once the APT has been installed in your system, it is able to mix itself in with typical traffic and make lateral moves within your networks. In this manner, an APT can achieve objectives of hackers over long periods of time, even evolving as they go in order to thwart the security methods that are being adapted to defeat them. 

Lack of compliance

One report found that at least one cloud storage service was publicly exposed in almost a third (32%) of companies. Many prominent breaches occur because these systems are improperly configured. A proper program and policies for governance and compliance can help you address the threat of risky cloud setups. Companies are starting to implement broader compliance throughout cloud to avoid this issue.

AI threats

To cloud as to other systems, AI is both friend and foe. Artificial intelligence has been on the rise lately, propelled by its applications in meeting business functions and staving off the persistent idea that another AI winter is coming. AI is also growing because it is being utilized by cybercriminals. AI is important in developing hackers’ evasion strategies, the moves they make to sidestep your efforts at detection and expulsion. AI will likely be used by cybercriminals to analyze areas they have infiltrated prior to implementing later-stage attacks, as well as to automate their target selection.

Hackers already have various ways to evade detection, and AI will bolster their efforts. Two other ways that cybercriminals avoid detection are through cryptomining and botnets, noted StateTech editor Juliet Van Agenen.

As the threats of increasing sophistication allowed by AI are used by criminals, so must organizations utilize the advantages of next-generation cybersecurity tools (including ones that leverage AI) to defend themselves.

AI tools that monitor user behavior and devices, signaling when they find anything irregular, will be released during 2019 and refined in the years ahead.
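The core idea behind such tools is to learn a user's baseline and flag sharp deviations from it. Commercial products use far richer models, but the principle can be illustrated with a simple, hypothetical standard-deviation threshold on a single metric:

```python
# Illustrative behavioral-anomaly sketch: flag readings far outside a
# user's historical baseline. Real tools model many signals at once.
import statistics

def is_anomalous(history, value, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return value != mean  # flat history: any change is noteworthy
    return abs(value - mean) > threshold * stdev

logins_per_day = [4, 5, 6, 5, 4, 6, 5]   # a user's normal pattern
print(is_anomalous(logins_per_day, 5))    # False: within the baseline
print(is_anomalous(logins_per_day, 240))  # True: possible account compromise
```

The same thresholding idea applies to data-transfer volumes, login locations, or API call rates, which is why behavioral monitoring pairs well with the account-hijacking threat described below.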

Malicious or abusive use of cloud

Cyberattacks become more likely when payment application fraud, bogus account creation, free cloud trials, and poorly configured cloud servers expose companies to vulnerability, noted the CSA. Cloud resources could be used in phishing efforts, email spam, or distributed denial of service (DDoS) attacks. These campaigns might be used to hit cloud rivals, other organizations, or users.

Account hijacking

Attackers might be able to listen to transactions and activities, change data, send back fraudulent data, and direct clients to bogus websites. The attacker can use a service or account as a new base for attack via account hijacking. They are able to get into critical areas, where they can proceed to sabotage the availability, integrity, and confidentiality of the environment.

(Part 2 continued below.)

Your strong cloud security partner

Vulnerability management is a key upside to the cloud; any companies that already have their workloads within the public cloud benefit from stronger vulnerability management that is built into those systems. Cloud service providers are focused on updating their infrastructure and other systemic elements regularly, so hosting workloads in cloud provides access to their updating process and protocols.

However, strong vulnerability management and breach prevention in cloud requires due diligence. Due diligence is about selecting the right cloud providers so that your security is protected even better than you could achieve internally. At Total Server Solutions, our SSAE 16 Type II audit demonstrates our compliance with the highest standard in data security. See our security commitment.

Go to Cloud Security 101 – Part 2 – Key Defensive Strategies >>>


The online retail space encompasses a vast array of eCommerce platforms and applications, each of which promises to provide your retail business with a path to eCommerce success. When we need to be confident that an eCommerce store can provide everything a growing retailer needs to support their business now and into the future, Total Server Solutions chooses Magento.

Why Magento? Put simply, it offers the best combination of functionality, flexibility, and support for online selling in the modern eCommerce market. We prefer Magento for many reasons, both large and small, but what follows are the most important.

Omnichannel – A Path Forward for eCommerce


Every eCommerce retailer should own and control their platform, but they shouldn’t have to sell only from that location. Maximizing sales means embracing other platforms, from social media to marketplaces like Amazon. In addition to being a powerful base of operations, Magento provides a huge range of integrations with other marketplaces and payment solutions. Magento makes it easier to build a true multi-channel retail business.

Magento is Scalable

Many retailers launch their eCommerce business on a hosted platform like Etsy, only to be forced to migrate when the platform's limitations get in the way of growth and innovation. Magento is intuitive enough to work well for a small eCommerce startup, but it is capable of scaling to support eCommerce stores with thousands of products and hundreds of thousands of visits a day.

Magento is Open Source

Proprietary eCommerce solutions implement one vision of what an online store should look like and the capabilities it should have. Magento is designed for the general case, but it is easily modified by retailers who need something different. Because Magento is open source, retailers can shape their store to the needs of their business, rather than having to shape their business to the limitations of the platform.

Mobile Experience

Mobile eCommerce is the preferred way to shop in many parts of the world. Magento provides responsive themes and a responsive administration interface, giving shoppers a great experience on all their devices and allowing merchants to run their store from any device or location.

Dynamic Affiliates and Partners

Finally, and perhaps most importantly, the Magento Marketplace is a curated collection of extensions that retailers can use to find secure and trustworthy modules to enhance and modify Magento stores. Because Magento is open source and enormously popular, there is a huge range of extensions for merchants to choose from. The marketplace gives retailers the flexibility they need to sell any product and to market and manage any store.

Although there are many other eCommerce options, Magento remains the most powerful and flexible eCommerce application.


Posted by & filed under List Posts.

In 1969, ARPANET – the Advanced Research Projects Agency Network – went live. The project, which grew out of the ideas of J.C.R. Licklider, is considered by many to be the inception of cloud computing. That's right: the cloud, at least in the basic precepts and architecture that form its basis, is 50 years old. New technologies are built on earlier ones, and they are also advancements at meeting the same core needs. One of the qualities considered most valuable in a technology is its capacity to enhance reliability and reduce downtime.

The threat of downtime is vague until you have numbers, though. Calculating downtime can be achieved by using a basic formula that adds major costs – lost revenue, lost employee productivity, and money paid out directly to customers. You can then expand your understanding by incorporating downtime frequency over time.

Formula to calculate downtime

To calculate downtime, you can start with this basic formula:

Cost of downtime = Revenue + Employees + Customers

COD = R + E + C

Note that you may want to add an additional variable, M, for miscellaneous costs. That category would include fees for any consultants you use for data recovery or related services. The sections below go into more detail on each of these key variables.
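As a quick sketch, the formula can be expressed in a few lines of Python (all dollar figures below are hypothetical):

```python
# Basic cost-of-downtime formula: COD = R + E + C (+ M)
def cost_of_downtime(revenue_loss, employee_productivity_loss,
                     customer_payouts, miscellaneous=0):
    """Sum the major cost components of an outage, in dollars."""
    return (revenue_loss + employee_productivity_loss
            + customer_payouts + miscellaneous)

# Hypothetical outage: $40,000 lost revenue, $10,000 lost productivity,
# $5,000 in SLA payouts, $3,000 for a recovery consultant
print(cost_of_downtime(40_000, 10_000, 5_000, 3_000))  # 58000
```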

Revenue loss

To the extent your customers are impacted, you may want to share these figures with them. Here is how to determine how much revenue is lost to downtime:

  • Figure out and list the revenue-generating parts of your business.
  • Determine what the per-hour revenue is on average for each of them.
  • Determine the significance of IT downtime. A 100% ecommerce company might lose all revenue, while one that is half brick-and-mortar would be 50% impacted.
  • Add up all the lost revenue from each revenue-generating component.
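The steps above can be sketched as follows; the components, per-hour figures, and impact percentages are hypothetical examples:

```python
# Hypothetical revenue-generating components, each listed with its average
# revenue per hour and the share of that revenue dependent on IT being up.
components = {
    "online store":   (12_000, 1.0),   # 100% ecommerce: fully impacted
    "retail counter": (8_000,  0.5),   # half brick-and-mortar: 50% impacted
}

outage_hours = 3

# Add up the lost revenue from each component
lost_revenue = sum(hourly * impact * outage_hours
                   for hourly, impact in components.values())
print(lost_revenue)  # 48000.0
```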

Employee productivity loss

Your productivity loss is determined through a formula that reuses some letters from the COD formula above, but with different meanings, as follows:

Cost of downtime to employee productivity = number of impacted employees x percent to which the outage impacts them x per-hour cost of average employee x the number of hours of downtime

Productivity cost = E x % x C x H
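A minimal sketch of this formula, with hypothetical numbers:

```python
def productivity_cost(employees, impact_pct, hourly_cost, hours):
    """Productivity cost = E x % x C x H."""
    return employees * impact_pct * hourly_cost * hours

# Hypothetical: 200 impacted employees, 75% impact,
# $30/hour average employee cost, 2-hour outage
print(productivity_cost(200, 0.75, 30, 2))  # 9000.0
```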

Customer payouts

An important aspect of downtime is what you will need to pay customers. You need to pay out whatever is mandated in your service level agreement (SLA) contracts. Plus, include in that figure the costs of any apology money that you pay – such as when an airline gives out flight vouchers or hotel rooms.

Looking at downtime broadly: frequency

As indicated in the introduction, technology changes, but many of the core considerations remain the same. Hence, the way to approach downtime is fundamentally the same as it has been for years – incorporating frequency, as in this still-relevant plan from 15 years ago:

1.)  Determine what elements of your business you are considering related to downtime. Those might be any of four types: data, systems, property, or people. Often data and systems are considered.

2.)  Figure out what it is that you are safeguarding. You must be concerned first and foremost with protecting your core competencies. Your value in the marketplace is determined by those ultra-valuable core competencies.

3.)  Focus on business functions. You want to put approximately 80% of your attention on the 20% of your data and applications that are most critical. You might include email, customer-facing apps, and your primary operational platform.

4.)  Classify different lengths, frequencies, and types of downtime. You might have a failure related to a single data center, or could suffer a national, regional, or branch outage, for example.

5.)  Determine full costs of downtime. Those extend beyond the above to the damage to your brand, potential litigation, etc.

Considering all costs will get you to your hourly cost of downtime.

Then, finally, you are able to use this formula to figure out downtime over time:

Frequency x Duration x Hourly Cost = Total Cost
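Putting that together, assuming illustrative figures:

```python
def annual_downtime_cost(incidents_per_year, avg_hours_per_incident,
                         hourly_cost):
    """Total cost = Frequency x Duration x Hourly Cost."""
    return incidents_per_year * avg_hours_per_incident * hourly_cost

# Hypothetical: 4 outages a year, 1.5 hours each, $20,000 per hour
print(annual_downtime_cost(4, 1.5, 20_000))  # 120000.0
```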

How to convey cost of downtime to management

Three steps can help when you want leadership at your organization to understand the critical value of uptime and why downtime is so important to avoid. First, establish the problem using broad data. Second, create a visual. Third, communicate your internal numbers to those who can invest in downtime prevention.

Know the industry numbers.

Establishing that downtime is an issue generally can help before you drill into specifics at your organization. The cost of downtime continues to rise over time as digital interactions grow more and more important to how businesses operate and bring in money. Andrew Lerner of Gartner noted in 2014 that the average downtime cost for a company was $5600 per minute, adding up to $336,000 per hour.

Really, though, there is far more diversity in downtime costs than that single average suggests, according to a study cited in Channel Futures: per-minute costs ranged from $137 to $17,244 among the firms studied in that analysis.

Size and industry greatly impact the amount that applies to a given organization. An hourly range from Avaya suggests typical costs of $140,000 to $540,000 per hour.

Use a picture.

You need to translate your technical understanding of downtime into something that is easy to grasp for a nontechnical audience. Get away from the jargon by translating it into an image of how you understand the situation.

For example, consider, as in an image posted in CIO, a wide area network (WAN) serving a staff of 1000 working at 5 remote offices. You install a router at your home office, where all the infrastructure for your operational business software resides, along with routers for each of the 5 remote locations.

If you lose a router, your productivity for that portion of your staff could plummet.

Convey your own numbers.

The other thing you want to do is to make your numbers known. Granularity can help immensely in this effort. You need to know the cost of a failure of 1 router.

Here is how you figure that out:

1.)  Decide, roughly (through HR data perhaps), how much an employee at a particular location makes on average per hour.

2.)  Determine the share of output that people are rendered unable to produce because of the downtime. That number is called the productivity impact factor and is expressed as a percentage.

3.)  Figure out what your productivity impact is through the following: Cost of downtime to productivity = Impacted employees x Productivity factor x average hourly salary.

Using the above example, the formula would work like this:

1000 employees involved x $20 per hour on average x 0.5 productivity impact (i.e. 50%) = $10,000 per hour

Since the cost to your productivity alone is $10,000 per hour, if you estimate the cost of a backup router installation at $8,000, the router pays for itself in under an hour of downtime.
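The worked example can be checked in a couple of lines (same assumed figures as above):

```python
employees = 1000        # staff across the 5 remote offices
hourly_wage = 20        # average hourly cost per employee (from HR data)
impact_factor = 0.5     # 50% productivity impact while the router is down

hourly_productivity_cost = employees * hourly_wage * impact_factor
print(hourly_productivity_cost)  # 10000.0

backup_router_cost = 8000
# Hours of downtime needed before the backup router pays for itself
print(backup_router_cost / hourly_productivity_cost)  # 0.8
```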

Uptime and high performance for your organization

Are you in need of a high-performance system so that your company does not suffer downtime? At Total Server Solutions, our infrastructure is so comprehensive and robust that many other top-tier providers rely on our network to keep them up and running. See our platform.


Posted by & filed under List Posts.

You want to get your WordPress speed as fast as possible, for a few key reasons:

  • Ecommerce shoppers can be incredibly impatient when it comes to page load time. One widely cited figure holds that the average attention span dropped from 12 seconds to 7 seconds between 2000 and 2016. Since performance is so important, focusing on that aspect of your business will yield more sales.
  • The extent to which people demand a fast load time is backed up by studies. Research has shown that 47% of people who visit a website will bounce away if the site fails to load in just 2 seconds.
  • While we know that people prefer faster sites since they can get what they need faster, the search engines reflect that same perspective: Google and other search engines use speed as a ranking factor.

There is an incredibly small amount of time to show anyone who visits your site what you have and try to get them interested in buying. If your site is not performing at a fast clip, you will spend that tiny piece of time the user is giving you failing to get them what they want. Research from Strange Loop found that there is a 16% drop in satisfaction, 11% fewer page views, and 7% reduction in conversion when page load time is slowed down by just 1 second.

Check your site speed.

To get a sense of how your site performs in different areas, you can use an online tool such as Pingdom to test from different geographical regions.

Streamline and optimize plugins.

You do not want any plugins active on your site that are not necessary or not pulling their weight. Minimize them; you can sometimes identify the worst offenders by deactivating plugins one at a time and re-checking site speed.

To optimize your plugins, you can check for any inefficiencies in the code. You specifically want to look for any calls to the database that are not needed. “WordPress has its own caching system, so generally speaking, using functions like get_option(), update_option() and so on will be faster than writing SQL,” noted the WordPress Codex.

Use a WordPress caching plugin. 

WordPress sites run a process that looks for necessary information, assembles it, and shows it to your customers – building in that dynamic way each time the site is accessed. Each time someone goes to your site, the server will gather information from your PHP files and MySQL database, compiling it into HTML content to be presented to the visitor.

That retrieval and building can lead to poor performance if there are many people using your site at the same time. Using a caching plugin, just by itself, can make your site 2 to 5 times as fast, per WPBeginner.

Instead of building the site from scratch each time a new user visits, the plugin copies the page when it loads for the first person, sending the cached copy to anyone else who visits. 
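The idea can be sketched in a simplified model – this is not how any particular caching plugin is implemented, just the concept of serving a stored copy instead of rebuilding the page:

```python
page_cache = {}

def render_page(url):
    """Stand-in for the expensive PHP/MySQL page build."""
    return f"<html>...content for {url}...</html>"

def serve(url):
    # The first visitor triggers the full build; later visitors
    # get the cached copy with no database work at all.
    if url not in page_cache:
        page_cache[url] = render_page(url)
    return page_cache[url]

serve("/shop")   # built from scratch
serve("/shop")   # served from cache, no rebuild
```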

Decrease the size of CSS and JS files.

You can speed up page loads by minimizing the size of your JS and CSS files, as well as by getting the number of server calls for them as low as possible. Minification of JS and CSS files is one of the recommendations within Google PageSpeed Insights.

You can manually speed up your site by going through the themes, or use plugins. The one suggested by CodeinWP is Autoptimize. 
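To illustrate what minification actually does, here is a toy CSS minifier – a simplified sketch only; real tools such as Autoptimize handle many more cases:

```python
import re

def minify_css(css):
    """Strip comments and unnecessary whitespace from a CSS string."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # remove comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # trim around punctuation
    return css.strip()

print(minify_css("body {\n  color: red;\n}"))  # body{color:red;}
```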

Shrink images. 

You want to get image sizes down as much as possible without losing quality. Photoshop, the Chrome PageSpeed Insights extension, and other general-purpose software can be slow at reducing image size; plugins that specialize in this task are preferable. EWWW Image Optimizer, WP Smush, and Optimole are all prominent, well-rated options. Any of those plugins will significantly reduce the size of your images and, in turn, accelerate your performance.

Keep your site updated. 

WordPress is updated often. Those updates include security patches, new features, and bug fixes. The plugins and themes you use should have new versions released on a routine basis too.

If your site does not have the newest versions of the core code, theme, and plugins, you will be more at risk for security issues and will also suffer from poor speed. For that reason, make sure that the most recent versions of all your WordPress site elements are installed.

Enable OPcode caching.

You can end up with a massive amount of PHP within WordPress, because every time a page loads, a large chunk of code has to be parsed via PHP.

Particularly within shared hosting settings, this parsing process can slow your site. OPcode caching can help: your compiled PHP content is set aside within a cache for temporary holding, so you do not need to re-parse the PHP nearly as much.

OPcode caching has been provided in the past via third parties, the most popular of which was Zend. When PHP 5.5 was released, Zend open sourced its OPcache code and contributed it to the PHP project, so it is now a standard part of PHP.

OPcache is supported by PHP 5.5 and later. Keep your PHP version updated. Newer releases will have additional options for this feature.
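As a starting point, OPcache is typically enabled and tuned through php.ini. The directive names below are real OPcache settings, but the values are illustrative and should be adjusted to your environment:

```ini
; Typical php.ini settings to enable and tune OPcache
opcache.enable=1
opcache.memory_consumption=128   ; MB of shared memory for cached scripts
opcache.max_accelerated_files=10000
opcache.validate_timestamps=1    ; recheck files for changes...
opcache.revalidate_freq=60       ; ...at most once every 60 seconds
```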

Optimize themes.

Themes can put as much as 3 times the load on the server due to excess and unoptimized database queries – so that is a key point. However, total file characteristics and image files must also be addressed.

  • Practice query optimization and minimization. You may want to hardcode static values into your theme; that can reduce queries, though the downside is that you will need to edit the code each time a value changes. Your site title and site charset are examples of values you could hardcode. You can do the same with menus so that your site does not run wp_list_pages() or similar functions.
  • Get the size and number of your total files down. Minify your JS and CSS files. Create single, optimized files out of various CSS files. Get the number of files that you use to display the average page on your site as small as possible. You can use plugins to aid your efforts.
  • Reduce image files. Check for any images that you do not need, that could perhaps be replaced by text. Check that the images are in the best format for the image type, and that they are optimized. You can also benefit from plugins such as WP

Improve hosting.

As noted in CodeinWP, your first consideration in speeding up WordPress performance is the infrastructure, which means improving your hosting. Shared hosting is the least expensive option, so it is understandable that smaller businesses turn to it at first. Shared hosting will not provide strong performance during peak hours, though; too many other sites are tapping into the same pool of resources you are. Within a cloud environment, you can set up virtual servers that allow you to maintain strong performance, with better demarcation between the different accounts.

Do you want to optimize the speed of your WordPress site? It is critical to your results. At Total Server Solutions, our cloud hosting boasts the highest levels of performance in the industry. Build your WordPress cloud now.


Posted by & filed under List Posts.

Three-part series (linked as they go live):

Online Growth Trends 2019: Ecommerce

Online Growth Trends 2019: Cloud

Online Growth Trends 2019: Web Development


Forecasters are often criticized because they simply cannot always be accurate – there is ultimately some guesswork involved. Meteorologists are a great example of a group that is maligned for being inaccurate, when in fact they are quite accurate on the whole: SciJinks, from the National Oceanic and Atmospheric Administration (NOAA), noted that 7-day forecasts are approximately 80% accurate, while 5-day forecasts are about 90% accurate. Similarly, predicting trends for 2019 is relatively accurate when sourcing reliably and remembering that no increased use of a technology happens in a vacuum; it is in relationship with other trends. Hence, most of these 2019 trends should hold true despite the fact that we are making prognostications at the beginning of the year.

Increasing need for adaptability 

Since mobile has become such a critical part of the Internet (representing 63% of users according to a 2017 analysis), it is necessary to meet the needs of mobile users as a first priority. Adaptive websites and applications are becoming more common in this climate. Services are easily accessible and can be web-connected immediately in these highly scalable environments. Often apps will be built to operate offline, with the ability to connect and transfer data as needed. Visual web development – i.e., web design – is going through seismic changes too. Adaptability is by no means built into all systems at this point. It continues to become a more crucial guiding principle in 2019.

New coding in JavaScript and PHP7  

JavaScript is often used by developers in their own work, noted Tarun Nagar. While the language has many imperfect aspects, it is developing all the time and continues to be used by most organizations worldwide. Turning to PHP, to say that it is popular is a huge undersell; it is actually used on 4 out of every 5 websites (80%), per Linux systems administration journalist Hayden James. The release of PHP7 has also caused sea changes within development: as Nagar noted, it added group use declarations, engine exceptions, and anonymous classes. Another significant aspect of PHP7 is that it introduced the Unicode codepoint escape syntax.

Website performance has benefited from the speed improvements PHP7 allows. That performance has supported the huge trends in blockchain systems, P2P trading and exchanges, and the cryptocurrency industry. While organizations have started to realize the many powerful applications of blockchain, cryptocurrency was the first; in order to create a cryptocurrency, web developers have to understand the technology completely. Organizations that develop cryptocurrency should also be careful with their user agreements given the risks of the field.

Web design shifts online

The choice of programming language used to be up to individual coders; with HTML5, JavaScript became not just a web language but a nearly ubiquitous development language.

There are many ways that you can customize your approach to JavaScript by utilizing different JavaScript frameworks. While having various frameworks is not completely aligned with standardization, it is possible to transfer the basic concepts used within one framework to another setting, as indicated by Carl Bergenhem. “This shifts the focus to better programming habits and architecture of web applications, rather than being akin to picking your favourite flavour of ice cream,” said Bergenhem.

Because native mobile and web applications share the same codebase within React Native, NativeScript, and other frameworks, those frameworks will help to draw additional coders to web technologies.

Another reason that coders will increasingly turn to the web is WebAssembly. Coding languages such as Rust, C#, and C++ are able to target the web because of it, and more languages will follow thanks to projects such as Blazor, which brings .NET to the web. The language a developer uses will cease to matter for the web – which means, essentially, that all developers are now web developers.

Progressive web applications (PWAs) will further blur the line between web apps and native mobile apps. As there is less of a distinct choice to be made in that regard, coders will be freed from worrying about the platform decision and can place their priorities squarely on user experience.

Personalization through AI

The applications of emergent technologies are often discussed in terms of their sexiest representatives; hence we discuss automation in terms of the driverless car. However, the power of a less glamorous technology such as artificial intelligence (AI) cannot be ignored, and there are plenty of exciting applications for it too, in the diverse environments where it will be deployed. This issue is best understood in context. Analytics has traditionally been about logging data and using it for the next version – which is reactive. In 2019, analytics will reach another tier of development with the implementation of machine learning. In this new world, data on how your app is being used will be gathered and used to determine how the site should evolve, improving the quality of the user experience immediately. This approach is proactive.

What this agility permits is the capability to deliver the UX that is most suitable to the particular person – assuming that there is sufficient data available for the user. This chameleonic aspect will allow you to produce a personalized website that can recognize the user and then present different tools and functionalities.

Progressive web apps 

Discussed briefly above, PWAs are becoming more prominent based on a sharpening understanding of user behavior. These apps are built in a manner that is intended to improve retention and sales by making them easier for individual people to use. HTML, JavaScript, and CSS are some of the core technologies that make progressive web apps possible.

Again, the idea of adaptability being key to web development is huge in 2019. PWAs update independently, allowing for autonomy, and they are notable for delivering full functionality across devices and native user settings. The Service Worker API is typically used for automatic updating of PWAs, and the HTTPS protocol is used to protect data via encryption.

Web accessibility

Web accessibility is an issue that technologists probably discuss too little. The truth is that this need – practicing inclusivity by ensuring those with disabilities do not have difficulty using your site – is becoming more central in 2019. Bergenhem noted that accessibility will continue to become more important to development, whether because of governmental regulations or because developers turn more to methods that are inherently accessible. “Accessibility is essential for developers and organizations that want to create high quality websites and web tools, and not exclude people from using their products and services,” explained the World Wide Web Consortium (W3C).

High-performance infrastructure to accelerate development

Staying abreast of trends in web development can help you to sharpen your skills in the most important areas as time passes. Of course, web development is not just about incorporating approaches into the way you develop but building and operating through fast, reliable infrastructure. At Total Server Solutions, we have brought together some of the best, most high-performance technologies and packaged them to be used together. See our true hosting platform.


Posted by & filed under List Posts.

Three-part series (linked as they go live):

Online Growth Trends 2019: Ecommerce

Online Growth Trends 2019: Cloud

Online Growth Trends 2019: Web Development


A September 2018 Gartner report forecast that cloud computing would grow 17.3 percent in 2019 to reach $206.2 billion. The bad news is that this is a slight downturn in projected growth rate, with Gartner having forecast 21% growth for 2018 – but expansion is still rapid. Given that incredible general growth rate, cloud is a trend in and of itself. It is becoming so ubiquitous that it is increasingly worthwhile to consider how the field is changing and what that might mean in terms of opportunities for businesses and organizations.

Top trends in cloud computing for 2019 include the following:

Serverless computing

One IT method that is becoming more prevalent is serverless computing: signing up for a public cloud with a platform on it and paying the cloud host a fee for the platform. This service, available through some hosting providers, lets you use platform as a service (PaaS) via a container through a cloud host, which charges for the platform access. The host handles setup of the physical machines and configuration of the servers.

Serverless computing is attractive to organizations for the same reasons that cloud itself is – the ability to pay on-demand for services rather than having to make capital investments in costly machines and environments. Servers must be purchased, stored, and configured; all that can of course be avoided with serverless computing.

Service meshes

For multiclouds, the network management backplane will be service meshes such as Linkerd, Envoy, and Istio. The service meshes will allow companies to integrate their private and public cloud environments with on-premise containerized data. Hub-and-spoke and mesh topologies will be increasingly used by cloud service providers to allow for easy integration and management of thousands of on-premise networks and virtual private clouds.

AI platforms

Artificial intelligence (AI) platforms are built to operate more smartly, and hence more optimally, than traditional systems. AI functionality is used within big data systems to develop stronger knowledge of how a business functions by enhancing its ability to collect strong business data.

You can get work completed faster with AI installed, since it will ensure work is distributed evenly. When data governance standards are integrated, machine learning and AI engineers can be better managed to follow best practices through the platform.

An AI environment can also cut your expenses by helping you to automate some labor-expensive and/or simple tasks (e.g., data extraction and copying), as well as to avoid error duplication. Staff members and data scientists can work together to improve your efficiency and speed if your AI platform is well-designed.


Multicloud and hybrid cloud tools

Additional multicloud and hybrid cloud tools will become commercially available. In order to mitigate risk, control costs, and perform migrations quickly, organizations will increasingly want multicloud backplanes, migration tools, and professional services from their cloud providers – accelerating their development. As these functionalities become more widely available, transitioning to cloud-native backbones via lift-and-shift, whether for data, workloads, or applications, will grow, noted James Kobielus. More companies will put legacy workloads into containers, avoiding the need to rewrite the code. Doing so means that sophisticated migrations can occur without assuming as much technical risk. Migration from legacy, on-premise infrastructures to IaaS and PaaS platforms will occur as it becomes increasingly affordable to do so.


Refactoring

Cloud-native development does not address apps already developed on-premise, and lift-and-shift is not the only option for those existing systems. Refactoring will become a more broadly used practice too. Prior to designing their infrastructure for multicloud, organizations will often think about how to move workloads and refactor. In order to benefit from native cloud services, organizations will reprogram or refactor instead of relying as much on lift-and-shift in 2019, according to the analysis of Cloud Technology Partners technology evangelist Ed Featherston.

Shortage of cloud skills 

The cloud carries with it the need for highly specialized skills that are costly, in high demand, and not easy to find. That being the case, the transition to cloud could worsen the staffing shortages that have troubled IT for some time.

According to a report featured in ITProToday, the cloud skills gap is so critical and so substantial, it costs the average large enterprise a quarter of a billion dollars ($258 million) annually. That amounts to 5% of their annual global revenue, on average.

Orphaned resources 

Cloud is easy to adopt, but it can lead to waste. A recent report by RightScale found that 30% of cloud investment is wasted by the average cloud-using organization. People might spin up a cloud service and keep it running even if they do not use it. Cost optimization within cloud will continue to become a key point of focus, so that the wasted spending of orphaned resources can be avoided. 

Cloud data lakes, databases, and warehouses

The greatest challenge for business intelligence and data warehousing has been answered by cloud data stores. Self-service platforms have typically been unreliable, noted an article in AI Business, while clunky schema configurations and slow relational methods in traditional architectures have damaged business access. The Internet of Things, artificial intelligence, and other technologies can benefit from the scalability of cloud data stores – as well as the fact that there is direct access to analytics tools. 

Internet of Everything

Often we talk about the Internet of Things (IoT) in terms of the new world in which virtually everything around us becomes an endpoint of the Web. However, that discussion is often referring to a broader concept, the Internet of Everything (IoE), which goes beyond connected things to also include data, process, and people – as indicated by Angela Karl. “IoE works to provide an end-to-end ecosystem of connectivity that consists of ‘technologies, processes, and concepts employed across all connectivity use-cases,’” wrote Karl, quoting Cisco.

The IoE utilizes data, processes, and machine-to-machine communication in order to learn about how people interact with their environments. A good example of its use is hospitality robots in Japan. The intelligent robots can blink, breathe, make hand gestures, and otherwise behave as humans do. They speak Japanese as well as fluent Chinese, Korean, and English; they greet guests, interact in real time, and provide simple services.

Hybrid cloud

Hybrid clouds combine the two models of cloud, public and private. Dataversity forecast that the benefits of hybrid cloud would eventually make it the chief model for cloud.

The obvious upside of hybrid cloud is that it increases your flexibility; the downside is that it increases complexity. Per the NIST definition, a hybrid cloud is a composition of two or more distinct cloud infrastructures. That will often mean blending public cloud and private cloud infrastructures, but it can also mean combining public cloud, community cloud, and/or private cloud.

Your cloud partner for 2019

Are you creating a cloud environment so that your organization can benefit from this technology as effectively as possible? Cloud is not just about what you do yourself but about having the right partners you can trust to deliver secure and reliable services. Like the Internet of Everything, that means not forgetting people. At Total Server Solutions, we maintain an around-the-clock staff of experts. Our people make all the difference.