DDoS attacks


Distributed denial of service (DDoS) is one of the biggest security threats facing the Internet. We can develop a false sense of security when we see the major takedowns of individuals such as Austin Thompson – aka DerpTrolling – and Mirai botnet operator Paras Jha. (Jha was recently sentenced, and Thompson just pleaded guilty.)

Despite these high-profile busts, DDoS goes on. An industry report that looked at Q2 2018 showed a 543% year-over-year increase in average attack size and a 29% increase in the quantity of attacks – consistent with our internal data as a DDoS mitigation provider. Attacks are becoming more sophisticated but have traditionally fallen into three primary categories, distinguished by how they are measured (a rough sketch follows the list):

  • Application layer attacks – These DDoS events, measured in requests per second (rps), involve an attacker trying to take the web server offline.
  • Protocol attacks – In these DDoS incidents, which are gauged in packets per second (pps), the hacker attempts to eat up the resources of the server or of intermediate equipment such as firewalls and load balancers.
  • Volume-based attacks – When DDoS targets volume, measured in bits per second (bps), the hacker attempts to overload a website’s bandwidth.
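As a rough illustration of how these three measurements differ, here is a minimal Python sketch; the traffic-sample structure and the alert thresholds are hypothetical, chosen only to show the three units side by side (real baselines are derived per site):

```python
# Hypothetical traffic sample aggregated over a one-second window.
sample = {
    "http_requests": 48_000,   # requests observed (rps) - application layer
    "packets": 3_200_000,      # packets observed (pps) - protocol layer
    "bits": 95_000_000_000,    # bits observed (bps) - volumetric
}

# Illustrative alert thresholds, not industry standards.
THRESHOLDS = {"rps": 10_000, "pps": 1_000_000, "bps": 10_000_000_000}

def classify(sample: dict) -> list[str]:
    """Flag which DDoS category each metric suggests."""
    alerts = []
    if sample["http_requests"] > THRESHOLDS["rps"]:
        alerts.append("possible application layer attack (high rps)")
    if sample["packets"] > THRESHOLDS["pps"]:
        alerts.append("possible protocol attack (high pps)")
    if sample["bits"] > THRESHOLDS["bps"]:
        alerts.append("possible volumetric attack (high bps)")
    return alerts

print(classify(sample))
```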

Two of the biggest names in DDoS have been DerpTrolling and Mirai. DerpTrolling was the handle of an individual who used DDoS tools to bring down major companies including Microsoft, Sony, and EA. Mirai is an IoT botnet, built primarily from CCTV cameras, that was used against the major DNS provider Dyn and various other targets. These two prominent DDoS “brands,” if you will, were first seen in the news as the attacks were occurring, and again in their aftermath as alleged parties behind the attacks were arrested and ordered into court. This article looks at Mirai and DerpTrolling, then explores what the landscape looks like moving forward.

The story of Mirai

A great business model from a profit perspective (though incredibly nefarious, of course) is to continually create a problem that your own solution can then resolve. That model was leveraged by Mirai botnet creator Paras Jha, who was a student at Rutgers University when the attacks occurred. Jha started experimenting by hitting Rutgers with DDoS at key times of year, such as midterm exams and class registration – while simultaneously attempting to sell DDoS mitigation services to the school. Jha was also active in Minecraft and attacked rival servers.

On September 19, 2016, the first major assault from Mirai hit French web host OVH. Several days afterward, the code to Mirai was posted on a hacking forum by the user Anna-Senpai. Open-sourcing code in this manner is used to broaden attacks and conceal the original creator.

On October 21, another attack leveraging Mirai was launched – this one by another party. That attack, which assaulted DNS provider Dyn, is thought to have been aimed at Microsoft servers used for gaming. By the time Jha and his partners, Josiah White and Dalton Norman, pleaded guilty to Mirai incidents in December 2017, the code had already passed into the hands of other nefarious parties – available to anyone wanting a botnet to pummel their competition or other targets.

The story of DerpTrolling

DerpTrolling was a series of attacks on gaming servers. Thompson, the primary figure, hit various targets in 2013 and 2014. The range of victims was broader than with Mirai: Thompson hit major companies such as Microsoft, Sony, and EA, along with small Twitch streamers.

DerpTrolling operated as @DerpTrolling on Twitter and would announce that he was going to hit a certain victim with his “Gaben Laser Beam.” Once the DDoS was underway, DerpTrolling would either post taunts or screenshots of the attack.

DDoS in court

On October 26, 2018, 22-year-old Jha was sentenced for the 2016 attacks he carried out using Mirai. The punishment: $8.6 million in restitution and six months of home incarceration. The sentence was massively reduced by his cooperation with federal authorities, including help bringing down other botnet operators.

Thompson pleaded guilty in federal court in San Diego to conducting the DerpTrolling attacks. Now 23 years old, Thompson faces up to 10 years in prison, 3 years of supervised release, and $250,000 in fines. Sentencing is set for March 1, 2019.

The continuing threat

Mirai is problematic because the source code was released. Because of that release of Mirai into the wild, anyone can potentially come along, adapt it, and use it to attack the many IoT devices that remain unsecured and vulnerable.

Research published in August 2017 noted that 15,194 attacks had already been logged based on the open sourcing of the Mirai code. Three Dutch banks and a government agency were targeted with a Mirai variant in January 2018, for instance. Rabobank, ING Bank, and ABN Amro were all hit with the wave – over a span of four days, these targets were each attacked twice. This incident underscores the varied motives of cybercriminals: coming just a few days after news that the Dutch intelligence community had first alerted the US that Russian operatives had infiltrated the Democratic National Committee and taken emails, these attacks were likely political hacktivism (although potentially state-sponsored).

While Mirai was a massive problem that truly threatened core Internet infrastructure, DerpTrolling is more microcosmic but nonetheless critical in terms of perception. DerpTrolling, at least to some folks, made DDoS seem fun, silly, and off-handed. His run through the legal system sends a message to the individual gamer: anyone wanting to perform what they may see as mischief online could end up with an ankle bracelet or even behind bars. Currently, one of the top searched questions related to DDoS is, “Is it illegal to DDoS?” To anyone unsure on the issue, it is becoming abundantly clear that it is a criminal activity taken very seriously by the federal government in the United States and elsewhere.

Setting aside the specific cases of the Mirai and DerpTrolling attacks, DDoS is generally becoming a more significant threat to the Internet all the time. Another industry study, released in January, found that 1 in 10 companies said they had experienced a DDoS in 2017 that resulted in more than $100,000 in damages – a fivefold increase over prior years. Meanwhile, there was a 60% rise in events whose downtime cost $501 to $1,000 per second. The research also showed a 20% rise in multi-vector attacks – which is also consistent with our data.

These figures are compelling when you consider DDoS mitigation services from a strict cost perspective; plus, it is possible many organizations are underestimating the long-term impact on trust (leading to loss of customers) and brand value that stems from DDoS downtime. Furthermore, increasing attack complexity raises the bar on the expertise needed to quickly stop events that are no longer as simple as they were in the past.

The multi-vector approach is just the tip of the iceberg, though, with the rise of artificially intelligent DDoS. Artificial intelligence is surging, and while this technology’s strengths for business are often heralded, it will also be used by the dark side. The issue with AI-strengthened DDoS is that it is adaptive: the AI keeps improving its approach, noted Matt Conran, “changing parameters and signatures automatically in response to the defense without any human interaction.”

Future-proofing yourself against DDoS

While the Mirai and DerpTrolling takedowns are major events in the fight against DDoS, industry analyses reveal the problem is still only growing. Preparing for the DDoS future is particularly challenging given the rise of multi-vector attacks and incorporation of AI. At Total Server Solutions, our mitigation & protection solutions help you stay ahead of attackers. We want to protect you.

Green indicator in address bar -- EV


While a secure sockets layer (SSL) certificate may sound like a paper credential, it is actually a file binding its holder to a public key that allows for cryptographic data exchange. Recognized industry-wide as a standard security component, SSL use is also a ranking factor that assists with search engine optimization (SEO). The core function of an SSL cert, though, is to provide encryption to the site pages for which it is configured, enable the https protocol, and introduce the lock icon in browsers that indicates a secure connection. Certificates can be validated to various degrees – and this validation provides a completely different, administrative layer of security to complement the technical security.
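To make the “file binding its holder to a public key” idea concrete, here is a minimal Python sketch using only the standard library to retrieve and print a live site’s certificate details (example.com is a placeholder host; substitute any HTTPS site):

```python
import socket
import ssl

host = "example.com"  # placeholder host

# Open a TLS connection; the handshake itself validates the chain.
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # the certificate as a parsed dict

# The subject binds the holder's identity to the public key in the cert.
print("subject:", dict(x[0] for x in cert["subject"]))
print("issuer: ", dict(x[0] for x in cert["issuer"]))
print("expires:", cert["notAfter"])
```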

Certification authorities & SSL validation categories

A certification authority (CA), also called a certificate authority, is an organization that has been authorized to issue SSL certificates; it reviews and grants applications for them. In their issuance of SSL certificates to allow for authentication of information delivered from web browsers to servers and vice versa, CAs are core to the public key infrastructure (PKI) of the Internet.

The three basic types of SSL certificates from a validation perspective are domain validation (DV), organization validation (OV), and extended validation (EV). This article outlines the basic, core differences between the three validation levels in brief and then further addresses the parameters of each level.

Nutshell differences between DV, OV & EV

While the types of validation that you can get for a certificate vary, the technology is fundamentally equivalent, following the same encryption standards. What differs hugely between the three is the validation that ensures the legitimacy of the certificate holder:

  • Domain Validation SSL – You can get these DV certificates very rapidly, partly because you do not have to send the CA any documentation. The CA from which you order the certificate simply needs to verify that the domain is legitimate and that you are its legitimate owner. The only function of a DV cert is to secure the transmission of data between the web server and browser, and anyone can get one; still, it helps show your visitors that you are the site you claim to be, building a baseline of trust.
  • Organization Validation SSL certificates – A step up from domain validation is the OV certificate, which goes beyond the basic encryption to give you stronger trust about the organization that controls the site. The OV cert makes it necessary to confirm the owner of the domain, as well as to validate certain information about the organization. In this way, the OV cert provides stronger assurance than you can get with a DV cert.
  • Extended Validation SSL certificates – The highest level of validation, and the most expensive SSL, is the EV certificate. Browsers acknowledge the credibility of an EV cert and use it to create a green indicator in the address bar. You cannot be granted or install an EV certificate until you have been extensively assessed by the CA. The EV cert has a similar focus to the OV cert, but the checking of the company and domain is much more rigorous: to successfully apply for an EV certificate, you must submit to a robust validation procedure that thoroughly verifies the genuineness of your organization and site prior to issuance. (The sketch after this list shows how these differences surface in a certificate’s subject fields.)
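These validation differences show up in the certificate’s subject field: a DV cert typically carries only a common name, while OV and EV certs add verified organization details. Here is a rough heuristic sketch; the subject dictionaries and field names are hypothetical examples, and exact fields vary by CA:

```python
def validation_hint(subject: dict) -> str:
    """Rough heuristic: infer the validation level from subject fields.

    DV certs usually carry only commonName; OV adds organizationName;
    EV additionally embeds jurisdiction and registration details.
    """
    if "jurisdictionCountryName" in subject or "serialNumber" in subject:
        return "likely EV (jurisdiction/registration details present)"
    if "organizationName" in subject:
        return "likely OV (organization verified)"
    return "likely DV (domain ownership only)"

# Hypothetical subjects for illustration:
dv = {"commonName": "example.com"}
ev = {"commonName": "example.com", "organizationName": "Example Inc",
      "jurisdictionCountryName": "US", "serialNumber": "1234567"}
print(validation_hint(dv))  # likely DV (domain ownership only)
print(validation_hint(ev))  # likely EV (jurisdiction/registration details present)
```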

In the case of a compromise, your insurance payout will also generally be higher for an EV certificate than for OV and DV, since there is better security baked into the EV process (rendering a compromise less likely).

Domain Validation – affordable yet less trusted

The DV certificate is the most popular type, so it deserves our attention first as we consider the strengths and weaknesses of this entry-level option.

Pros:

  • You can get one very quickly. You do not need to give the CA any additional paperwork to confirm your legitimacy, and it typically takes only a few minutes to get one.
  • The DV certificate is very inexpensive. These certs are typically issued through an automated system, so you do not have to pay as much for one.

Cons:

  • The DV certificate is less secure than certs with higher validation levels since you are not submitting to any real identity validation. That ease exposes you to potential fraud: an attacker could conceal who they are and still get issued a DV cert – for instance, by poisoning DNS servers to pass the domain check.
  • When a DV certificate is installed, since there is no effort to vet the company, you are less likely to establish trust with those who visit your site.
  • Since DV certificates do not yield as much trust, people who use your site might not feel inclined to give you their payment data.

Organization Validation – beyond the domain check 

While a DV certificate simply connects a domain and owner, that quick-and-dirty issuance process does nothing to check that the owner is a valid organization. OV is a step up, ensuring that the domain is operated by an organization that is officially established in a certain jurisdiction. While these certificates also issue relatively quickly, you do need to go a bit beyond the simple signup process used for a DV cert, since you must do more to prove the identity of your firm.

The OV certificate presents your company details, listing your company’s name; fully qualified domain name (FQDN); nation; state or province; and city.

Extended Validation – premium assurance

The Extended Validation certificate, as its name suggests, involves much more rigorous checking to confirm the legitimacy of the organization, in turn providing a significantly stronger browser indication that the domain can be trusted. You will need to wait to get an EV in place (in the meantime, you could use a rapid-issue DV certificate and then replace it with the EV certificate once validated).

EV is bound by parameters determined by the Certification Authority Browser Forum (CA/Browser Forum), a voluntary association of root certificate issuers (consortium members that provide certificates issued to lower-authority CAs); certificate issuers (organizations that directly validate applicants and issue certificates); and certificate consumers (CA/B Forum member organizations that develop browsers and other software that use certificates for public assurance).

In order to provide the greatest possible confidence that a site is operated by a legitimate company, an EV SSL verifies and displays the organization that owns the site via inclusion of the name; physical address; registration or incorporation number; and jurisdiction of registration or incorporation.

By making validation of the company more robust, users of EV SSL are able to combat identity thieves in various ways:

  • Bolster the ability to prevent acts of online fraud such as phishing that can occur via bogus SSL certificates;
  • Offer a method to help organizations that could be targeted by identity thieves strengthen their ability to prove their identity to site visitors; and
  • Help police and other agencies as they attempt to determine who is behind fraud and, as necessary, enforce applicable laws.

The clearest way that EV is indicated is through a green address bar. This visual cue of the security and trust level of a site signals to consumers who may know nothing about SSL certificates that the browser they are using approves of the site.

Maintain the trust you need

Do you need to keep your transactions and communications secure, whether for ecommerce, to protect a login page, or to improve your search engine presence? At Total Server Solutions, our SSL certificates are a great way to show your customers that you put security first. See our SSL certificate options.

value of big data


Why does big data matter, in a general sense? It gives you a more comprehensive view. It enables you to operate more intelligently and drive better results with your resources by improving your decision-making and getting a stronger grasp of customers and employees alike.

Big data may simply seem to be a way to build revenue (since it allows you to better zero in on customer needs), but its use is much broader – with one key application now being cybersecurity. Big data analytics allow you to determine your core risks, pointing to the compromises that are likeliest to occur.

Big data is not some kind of optional add-on but a vital component of the modern enterprise. Through details on where attackers are located and incorporation of cognitive computing, this technology helps you properly safeguard your systems.

Ways data is valuable

There are various ways in which data has value to business:

Automation. Consider the very real and calculable value of task automation. AI, robotic process automation (RPA), chatbots, and similar technologies allow for automation of repetitive chores. When you consider the value of automation, you are thinking in terms of how much it is worth to free a person from those chores to work on other, more complex tasks.
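As a back-of-the-envelope sketch (all figures hypothetical), the annual value of automating a repetitive chore can be estimated like this:

```python
# Hypothetical inputs for valuing task automation.
hours_saved_per_week = 6          # repetitive work removed per employee
fully_loaded_hourly_cost = 55.0   # salary + benefits + overhead, in dollars
employees_affected = 12
weeks_per_year = 48               # allowing for leave

annual_value = (hours_saved_per_week * fully_loaded_hourly_cost
                * employees_affected * weeks_per_year)
print(f"Estimated annual automation value: ${annual_value:,.0f}")
# -> Estimated annual automation value: $190,080
```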

Direct value. You want to get value out of your data directly. Deloitte managing director David Schatsky noted that you want to consider key questions such as the amount of data you have, the extent to which you can access it, and whether you will be able to use it for your intended purposes. You can simply look at how data is being priced by your competitors to get a ballpark sense of direct value. However, you may need to conduct a fair amount of testing yourself to figure out what the true market value really is. Don’t worry if this process does not come naturally. A digitally native organization will be likelier to prioritize its data and know how much value it holds; after all, such companies are fundamentally focused on using data to grow their businesses.

Risk-of-loss value. Think about information the same way you think about losing a good friend or important business contact. In many cases, we only appreciate what we have when it’s gone, but you have much better foresight if you consider your data’s risk-of-loss value – the economic toll it would bring if you could not access or use it. Similarly, put a dollar amount on the value to your organization of data not being corrupted or stolen; i.e., how much is it really worth to you to keep data integrity high and not undergo a breach? Think about a breach: you could have to deal with lawsuits, fines from government agencies, and lost opportunity cost alongside actual cost. Also keep in mind that you could face a nightmare scenario in which your costs exceed the amount of your cybersecurity insurance policy – so you think you are prepared but get blindsided by expenses nonetheless.

Algorithmic value. Data allows you to continually improve your algorithms. That creates value by identifying the most relevant user recommendations; we have all experienced system recommendations that were meaningful and ones that were not, so increasing relevance is critical. It is now considered a standard best practice that you can better upsell and cross-sell when you have integrated product recommendations for customers. A central concern with algorithms is the data you feed into them, which should be as extensive and accurate as possible. For example, you might have data on destruction from a natural disaster such as the flash flooding in Jakarta, Indonesia: you get a sense of economic damage via as thorough a data set as possible on damaged buildings and infrastructure – so the quality and scope of your data set determine how good the algorithm is.
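To ground the idea of algorithmic value, here is a minimal sketch of the kind of item-to-item recommendation logic that improves as you feed it more purchase data. The purchase histories are hypothetical, and real recommenders are far more sophisticated:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; more (and cleaner) data -> better output.
orders = [
    {"router", "ethernet cable", "switch"},
    {"router", "ethernet cable"},
    {"router", "switch"},
    {"laptop", "ethernet cable"},
]

# Count how often each pair of products is bought together.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(product: str, top_n: int = 2) -> list[str]:
    """Suggest items most frequently co-purchased with `product`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("router"))  # e.g. ['ethernet cable', 'switch']
```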

Why know data values?

You want to prioritize data. You want to understand the diverse ways it has value. It also helps to understand exactly how valuable certain data is to you. Data valuation – sound, accurate valuation – is critical for three primary reasons: 

Easier mergers & acquisitions – When mergers and acquisitions occur, the stockholders may lose out if the valuation of data assets is incorrect. Data valuations can help to bolster shareholder communication and transparency while allowing for stronger terms negotiation during bankruptcies, M&As, and initial public offerings. For instance, an organization that does not understand how much its data is worth will not understand how much a potential buyer could benefit from it. Part of what creates confusion related to data valuation is that you cannot capitalize data per generally accepted accounting principles (GAAP). Since that is the case, there is great disparity between the market value and book value of organizations.

Better direct monetization efforts – As indicated above, direct value is an obvious point of focus. You can make data more valuable to your organization by either marketing data products or selling data to outside organizations. If you do not understand how much your information is worth, you will not know what to charge for it. Part of what is compelling to companies considering this direction is that you can garner substantial earnings from indirect monetization. Still, firms remain skeptical about sharing data with outside parties regardless of the potential benefits, since there are privacy, security, and compliance issues involved.

Deeper internal investment knowledge – Understanding the value of your various forms of data will allow you to better figure out where to put your money and to focus your strategy. It is often challenging for firms to figure out how to frame their IT costs in terms of business value (which is really necessary to justify cost), and that is particularly true with data systems. In fact, polls show that among data warehousing projects, only 30% to 50% create value. You will get a stronger sense of areas that could use greater expenditure and places of potential savings when you have a firm grasp of the relationship between your data and business value.

You can greatly enhance the relationship between business and IT leadership by learning how to properly communicate the value of data. The insight into data value that you glean from assessing it will lead to CFOs being willing to invest additional money, which in turn can produce more positive results.

Steps to better big data management

Strategies to improve your approach to big data management and analysis can include the following:

Step 1 – Focus on improving your retail operations.

Predicting the ways that shoppers will behave – which in turn tells you roughly how they will act on a site – is being bolstered through innovations in machine learning, AI, and data science. Retailers benefit from this data because it helps them determine what products they must have in stock to keep their sales high and their returns low. It also helps to guide advertising campaigns and promotions. In these ways, sharpening your data management practices can lead to greater business value.

Step 2 – Find and select unified platforms.

You want to be able to interpret and integrate your data as meaningfully as possible. You want environments that can draw on many diverse sources – gathering information from many types of systems, in different formats, and from different periods of time – and bring it all together into a coherent whole. Only by understanding all of the data at your disposal holistically and as part of this fabric can you leverage true real-time insight. You should also have capabilities sophisticated enough to separate, as you go, the data you need from the data you don’t for certain applications, with baked-in agility.

Step 3 – Move away from reliance on the physical environment.

Better data management is also about moving away from scenarios in which data is printed and evaluated as hard copies. IT leadership can instead use an automation platform to send reports to all authorized people, who can then view the reports digitally.

Step 4 – Empower yourself with business analytics.

You will only realize the promise of big data and see competitive gains from it if you are getting the best possible numbers from business analytics engines. For scenarios in which you are analyzing batch data and real-time data concurrently, you want to blend big data with complementary technologies such as AI, machine learning, real-time analytics, and predictive analytics. You can truly leverage the value of your incoming data through real-time analysis, allowing you to make key decisions on business processes (including transactions).

Step 5 – Find ways to get rid of bottlenecks.

You really want simplicity, because if a data management process has too many parts, you are likelier to experience delays. One company that realized the value of its big data, highlighted in a story in Big Data Made Simple, is the top Iraqi telecom firm, AsiaCell. When AsiaCell started paying more attention to big data management, it realized that it sometimes copied data unnecessarily and frequently lost it because its processes were not established and defined.

Step 6 – Integrate cloud into your approach.

Cloud is relatively easy to deploy (without having to worry about setting up hardware, for instance), but you want to avoid common mistakes, and you need a plan. After all, you want to move rapidly for the greatest possible impact (rather than losing a lot of energy in analysis as you consider the transition and review providers). To achieve this end, many organizations are shifting huge amounts of their infrastructure to cloud, often doing so in conjunction with containerization tools such as Docker (for easier portability, etc.). Companies will often containerize in a cloud infrastructure and then connect those containerized apps with others inside the same ecosystem.

Deriving full value from your data

Incredibly, the report Big & Fast Data: The Rise of Insight-Driven Business said nearly two-thirds of IT and business executives (65%) believed they could not compete if they did not adopt big data solutions. Well, there you have it: more and more of us agree that big data is critical to success. That being true, we must then assume that taking the most refined and sophisticated approach possible to analysis is worthwhile. TSS Big Data consulting / analytics services allow you to efficiently harness, access, and analyze your vast amounts of data so you can take action quickly and intelligently. See our approach.

managed data protection for your IT systems


A 2017 report from IDC found that the data protection and recovery software market was not growing as fast as it had in previous years – dropping to a 2.1% compound annual growth rate (CAGR) from a 6.8% CAGR in 2016. There was much more impressive growth among some players though: smaller providers grew at 14.5%, while one vendor outdid that rate with 26.6% growth. That vendor? Veeam.

With Veeam, you would be utilizing a solution that has now been adopted by nearly three-quarters (74%) of Fortune 500 companies. This article looks at basics related to data protection itself and then lays out specific strengths of Veeam.

Data protection – the basics

Data protection is critical to ensuring that your data is not lost or stolen and that you maintain its integrity. It is particularly important to concern yourself with safeguarding information since maintaining uptime allows people to get to their records without any hitches. The volume of information is also increasing, as is the understanding of its value – factors that further expand the need for data protection.

Business continuity/disaster recovery (BC/DR) and operational data backup both fall under the heading of data protection. Building these defenses into your business is not just important but necessary. Maintaining privacy of information and preventing data breaches are critical to safeguarding information. Also, setting up stringent safeguards for data serves an important function since maintaining availability is so important to organizations. Notably, hyper-availability is the centerpiece of the Veeam approach.

Two directions for data protection

The two key ways in which data protection is developing are data management and data availability. Information lifecycle management – a complete plan used to protect information from hardware failure, disruptions or outages, malicious incidents, and user or software errors – is a primary practice within data management. A narrower concern is data lifecycle management, through which the transmission of information to storage is automated. Data management is also used to get as much value as possible out of data via analytics, reporting, and testing or development.

Data availability is key because, even if you lose data or it becomes damaged, you will still be able to get users the correct data seamlessly. Along with robust data management, Veeam managed data protection offers hyper-availability so that your information is always at your fingertips – ready to propel insights and innovations.

Reasons data protection is so important

For various reasons, you need to protect your data:

  • Compliance – You need to comply with regulations in order to avoid fines, lawsuits, and other expenses that arise from breaches. One of the key ones for any organization that handles user information online is the General Data Protection Regulation (GDPR) from the European Union (EU), which went into effect in May 2018. Violations of the GDPR can lead to fines up to 4% of an organization’s previous-year global revenue. That is just one example of regulatory compliance – and costs extend far beyond the fines to forensics, lawsuits, and other expenses.
  • Security – You want to be certain that all your data is accurate. When customers or staff enter information, you must verify that there are no mistakes or otherwise incorrect data. In order to make sure that fraud is not occurring, your systems and services should confirm that contact details, bank account numbers, and other information are accurate and not being used illicitly – via a compromised bank account, for instance.
  • Best practices – Beyond concerns with compliance and security, you want to know that your information is only used in the manner that you expect it to be used – only in ways that are relevant and defined. Data should not be kept on-hand any longer than is needed, and during that time it should be kept safe. These best practices apply especially when you are marketing, changing staff records, or onboarding new staff members.

Of course some of your information is higher priority than other information is. You want to protect especially sensitive information so that it is not taken in identity theft, phishing, or other fraudulent efforts. Key pieces of data include full names, health data, credit card or bank information, phone numbers, emails, and addresses.

Cloud data protection

In order to safeguard your information that is at-rest or in-motion within cloud environments, you can leverage cloud data protection to utilize the best security and storage methods. You can meet a few core needs through this approach:

  • Infrastructure security – the policies and techniques you need to keep your cloud servers and storage secure.
  • Storage management – logging of edits, copies, and access to data, with data access enabled via an interface that is highly available and secure.
  • Integrity – encryption that keeps information from being corrupted or altered by unauthorized parties, so data maintains the same form it had when stored (see the sketch after this list).
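As a minimal sketch of the integrity point, here is what symmetric encryption at rest can look like using the widely used third-party `cryptography` package; the record is hypothetical, and real deployments also need proper key management:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate and persist a key once; losing it means losing the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer: Jane Doe, card ending 4242"  # hypothetical record

token = fernet.encrypt(record)          # ciphertext safe to store at rest
assert fernet.decrypt(token) == record  # round-trip integrity check

# Fernet tokens are authenticated: tampering raises InvalidToken on
# decrypt, which is what keeps unauthorized alteration detectable.
```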

If you need managed data protection for your cloud ecosystem, you can achieve that through data protection as a service (DPaaS) – which is offered through a Veeam-powered Total Server Solutions plan.

Features of data protection

For secure storage, you can use tape or disk backup to copy data to a tape cartridge or disk-based storage. When alterations are made to data, you can leverage continuous data protection (CDP) to maintain safety. For speedier recovery, automatically created storage snapshots contain links that make it easier to get to data on a disk or tape. You can also get an identical copy of files or a website via mirroring.

For strong data protection, backup has always been fundamental. Traditionally that involved backing up every night – or in some other defined regular interval – to a tape library or drive. These backups could then be tapped if data became damaged or lost. Today data backup has become much more sophisticated, seamless, and user-friendly.
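A toy example of the backup-plus-verification idea: copy a file, then confirm the copy is bit-identical via a checksum. The paths are placeholders, and real backup software such as Veeam layers scheduling, deduplication, and recovery on top of this basic loop:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

source = Path("orders.db")          # placeholder source file
backup = Path("backups/orders.db")  # placeholder backup target
backup.parent.mkdir(parents=True, exist_ok=True)

shutil.copy2(source, backup)  # copy contents and metadata

# Verify the backup before trusting it for recovery.
if sha256(source) == sha256(backup):
    print("backup verified")
else:
    raise RuntimeError("backup corrupted - do not rely on it")
```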

Why Veeam?

Managed data protection with Veeam is centered on delivering hyper-availability. This hyper-availability is tricky because data environments have become so complex in recent years, with security safeguards a mandatory best practice as data flows through multi-cloud ecosystems. Defending your data is absolutely critical because of how important data has become to broadening the insights of organizations and allowing prediction of fluctuations in demand. With the data properly protected and uncorrupted, you are also able to glean from it all you can, innovating more rapidly and reducing time-to-market.

Beyond hyper-availability and the prominence within the Fortune 500 (see introduction), other reasons that Veeam managed data protection is a strong choice include the following:

  • Savings – The strength of Veeam is key because downtime is incredibly expensive: $5600 per minute, according to Gartner! Avoid those expenses, along with the reduction in staff confidence and brand value resulting from downtime, with a high-availability solution.
  • Simplicity – It is simple to deploy and use, particularly with a managed services provider such as Total Server Solutions.
  • Speed – If recovery is needed, you can leverage the industry’s fastest recovery time to get your apps, servers, and files back up and running.

Launching your Veeam managed data protection solution

Do you want to see how protecting your data through a managed Veeam solution can improve your business? At Total Server Solutions, Veeam fits with our general focus on information security, which includes an SSAE 16 Type II audit – proving our adherence to a standard designed by the American Institute of Certified Public Accountants. To learn more about Veeam managed data protection and our other security offerings, contact us today.

How Blockchain will impact ecommerce -- Bitcoin


A blockchain is a public electronic distributed ledger that contains data, originally designed for cryptocurrency transactions but increasingly used for other purposes. You do not need to perform any central bookkeeping with blockchain because the ledger expands as additional data blocks are logged and incorporated into the system, lined up in order of entry.

Stemming originally from the development of Bitcoin, the distributed ledger technology (DLT) of a blockchain is now used in numerous ways within business and other organizational settings. You can put just about any file or type of data within a blockchain, and doing so produces an immutable record. As an additional check, there is no central administrator of a blockchain system; rather, the whole community of users verifies the authenticity of the data.
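A minimal sketch helps show why such a chain is effectively immutable: each block stores the hash of its predecessor, so altering any historical block invalidates every hash that follows. The block contents here are hypothetical, and real blockchains add consensus mechanisms on top:

```python
import hashlib
import json
import time

def make_block(data: str, prev_hash: str) -> dict:
    """Create a block whose hash covers its data AND its predecessor's hash."""
    block = {"time": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a tiny chain, in order of entry.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("order #1001: 2 widgets", chain[-1]["hash"]))
chain.append(make_block("order #1002: 1 gadget", chain[-1]["hash"]))

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; tampering anywhere invalidates the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("time", "data", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))                            # True
chain[1]["data"] = "order #1001: 200 widgets"   # attempted tampering
print(verify(chain))                            # False - hash no longer matches
```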

Blockchain is set to have a profound impact on online sales. This article looks at a dozen key ways in which the technology will disrupt ecommerce:

Impact #1 – Information security

Data storage is a problematic issue for current ecommerce platforms. Retail companies and customers that have user accounts on ecommerce sites generate vast volumes of data for these ecosystems, and it is tricky to figure out how to safeguard it effectively.

The data security advantage can be understood in terms of centralization vs. decentralization. Large swaths of data have been taken from ecommerce firms when attackers have accessed centralized servers. Blockchain is a distributed ledger, so all your information is decentralized – making it extremely challenging for someone to succeed with an attack.

The security of information within blockchain is high in large part because of the distribution, which requires a nefarious party to infiltrate every one of the system’s nodes. The strengthening of security is an obvious plus.

Impact #2 – Regulations

May 25, 2018, was a major day for data privacy regulations that affect any organization that sells products to or allows user account creation by European citizens. On that day, the European Union (EU) began enforcing the General Data Protection Regulation. GDPR compliance is critical for all organizations that do business (even if it’s entirely virtual) in Europe. Failure to meet its guidelines could result in fines as high as 4% of the firm’s prior-year annual global revenue. Since the protections of the GDPR are mandatory and are hence the subject of audits and potential investigations, it will have a major influence on ecommerce. The strong security of blockchain could increasingly be seen as a go-to best practice.

Impact #3 – Simpler receipt and warranty access

One other advantage of the transition to blockchain has to do with access to and storage of receipts and product warranties, as indicated by management consulting company Accenture. As a consumer, you may not be able to find warranty paperwork for a repair or a receipt for a return. We would no longer have to worry about this paper trail (except as a hard-copy backup) once all of these files are within the blockchain. It would be simple to verify proof-of-purchase, because everyone with the right login and permissions would be able to see these files, and it would serve as a central point of information access usable by everyone involved, from customers to retail stores to manufacturers.

Impact #4 – Future-proofing

Blockchain is becoming prevalent in part because it is seen as the wave of the future – as a necessity really – given the increase, over time, in threats to the industry. Quite literally millions of people worldwide could be affected if the world were to fail to adopt a stricter security paradigm; needless to say, retailers would be hurt by lack of security foresight as well.

When we talk about blockchain, we are not just discussing something for the era ahead, of course. It is essential for organizations to understand that business-as-usual with data security will not cut it moving forward. DLT companies will continue to come up with new innovations that will likely boost the number of blockchain implementations – allowing ecommerce to maintain safety for the years ahead.

Impact #5 – Lower expenses

According to retail content firm Total Retail, an ecommerce firm can benefit from efficiencies produced through introducing blockchain to their provider network. Partnerships with vendors today take place within disparate environments. Many of the expenses of conducting retail online will drop as secure and private engagement becomes possible via the deployment of a unified blockchain platform. 

Impact #6 – Multi-retailer loyalty programs and personalized promotions

If you currently belong to any loyalty programs as a consumer, you can probably appreciate the greater freedom that would come from connecting the programs of different shops, letting you decide where you would rather collect a reward, noted Accenture. With blockchain, you could garner both benefits: your loyalty points and purchase record would all be stored within the blockchain, and you would control your loyalty data and determine which ecommerce companies could see it.

Impact #7 – Transparency

Ecommerce companies have increasingly criticized their competitors for being too opaque with their customers. Some new platforms are using blockchain as their centerpiece. Transparent transactions are inherent to distributed ledgers. Unilever, Walmart, eBay, Alibaba, and Amazon are all investing in blockchain research – recognizing its manifold benefits.

Impact #8 – Payments

The improvement of payments is also a central focus of the distributed ledger model. There is approximately $9 trillion in coins, paper, checking accounts, and other traditional currencies worldwide, per Accenture. The use of cryptocurrencies is currently at 6% of that total and increasing.

You could make payments straight to machines, as with a car rental service. If a digital wallet were in use for the vehicle, you would not need any human help and could simply pay and get into it. You could minimize fees charged by intermediaries. It would not be necessary to pay beforehand, and you would not have to wait.

Impact #9 – Reducing fraud and improving quality

Consumers can get hurt when they use unsafe counterfeit products. These products also take profit away from legitimate businesses. Early adopters of blockchain within the food industry are using it to thwart the health risks of eating counterfeit renditions of products; other retailers simply want to maintain the integrity of their products. With DLT, people throughout the supply chain will be able to validate the integrity of goods prior to sending them out to customers and stores – enabled by the transparent community sharing of quality and authenticity data.

Impact #10 – Content payment

You can now directly receive compensation for content through sharing of ad revenue on some platforms. In these scenarios, the users of the social media site can give each other upvote rewards that are equivalent to cash. Steemit is a system that currently is designed in this manner (although there are other options in this category as well, as recommended by steemit user sature and offering somewhat different models – including AKASHA, e-Chat, Minds, Nexus, Qbao, Sapien, Scryptio.io, Sphere, Synereo, Yours, and YOYOW). Proceed with due diligence: research these companies carefully since some may rise to the top while others go belly-up.

The way Steemit works is that users help to curate the site and bring more valuable posts to the top. The service in turn hands electronic tokens to the users. As the process continues, e-wallets are used to create a blockchain transaction. You can take payment in whatever currency you want. Also, there are no delays or extra processes with intermediaries (as noted above), so everything moves more quickly and seamlessly. The alternative platforms to Steemit are also integrated with the blockchain and have the same basic benefits for users.

Impact #11 – Review credibility

It is very important to consumers that they be able to trust the quality of review platforms when they are trying to assess functionality, support, and other aspects of a product. The basic problem with the way that review platforms have operated thus far is that user legitimacy is validated, but not to any rigorous degree (referring to checks and balances for the true identity of any user, not the access controls for any established user account).

Since that is such an issue, the objective of these innovative platforms is to do away with fraudulently created reviews, whether poor ones written by rivals, gushing ones written by the business itself, or other types. Zapit is an example of a blockchain platform that works to better validate reviews by leveraging the DLT. In order to make the setup as mutually beneficial as possible, these systems incentivize credibility by paying moderators and review authors alike. Some purists believe that the Bitcoin blockchain is the only one through which the technology expresses its full benefits; after all, its community is huge – 22 million Bitcoin wallets established worldwide as of July, per Bitcoin Market Journal – which makes it extremely difficult to conduct fraudulent mining.

Impact #12 – Greater respect and directness with ads

The gap that exists between consumers and online stores should be minimized as much as possible to allow for stronger efficiency, better speed, and easier management. Again, many people appreciate this type of system because of the lack of intermediaries when sending advertising to browsers. One example of blockchain used in this way is the Basic Attention Token – which confirms that views of ads are engaged and monetizes them.

Your blockchain system

Securing and validating ecommerce transactions is challenging. However, with the advent of blockchain, identity and data safety and integrity are getting a huge boost.  By integrating the blockchain into your online sales approach, you are able to store your data in immutable form and in a manner that is validated by all users within the community. Beyond concerns with the blockchain, you need servers to run your ecommerce systems; and that infrastructure must also be highly secure. At Total Server Solutions, our data centers are certified to meet the parameters of the gold standard in ecommerce, SSAE 16. See our ecommerce solutions.

Cloud improves scalability, which has numerous benefits.


One of the first things you hear about the cloud is that it helps businesses grow because it improves their scalability. What is scalability, really? Why is it so critical to business success? How does cloud technology fit into the picture?

Certification training firm Linux Academy defines scalability within computing as a characteristic of a “system in which every application or piece of infrastructure can be expanded to handle increased load.” It’s easier to understand the key reason scalability is important when you look at performance via an example. Your web application might start to get attention on ProductHunt or a similar service (or see a sudden spike in use for any other reason). Your servers are then suddenly inundated with an extreme load of requests – legitimate requests that you want to answer. In a scenario that is not highly scalable, such as a dedicated server handling all the traffic, your app or site may crash under the stress. You will lose credibility if your app experiences downtime, and you deliver poor UX in the meantime.

We will explore the many ways in which scalability is key to your general success and the operation of your IT systems – first briefly addressing the two key scaling approaches, horizontal and vertical.

Horizontal vs. vertical scalability

The two core approaches that are used to improve scalability and build up your infrastructure are the horizontal and vertical approaches:

  • Horizontal scalability: In this form, you increase the number of servers you have running so that you can distribute the load across more equipment (see the sketch after this list). Horizontal scaling is necessarily more complicated than vertical because you need to sync all data, apps, and backups; plus, you must monitor, update, protect, and otherwise perform administration on additional machines.
  • Vertical scalability: In this form, you simply give more resources – additional central processing units (CPUs), an upgrade to solid state drives (SSDs), or more random-access memory (RAM) – to an instance that is already implemented. This path is simple because your cloud host will already have the servers virtualized and configured.
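The core mechanic of horizontal scaling is spreading requests across those additional servers. Here is a minimal round-robin load-distribution sketch; the server addresses are hypothetical, and production setups use a dedicated load balancer rather than application code:

```python
import itertools

# Hypothetical pool; horizontal scaling = appending servers to this list.
servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
rotation = itertools.cycle(servers)

def route(request_id: int) -> str:
    """Send each incoming request to the next server in the rotation."""
    target = next(rotation)
    return f"request {request_id} -> {target}"

for i in range(6):
    print(route(i))
# Requests alternate across all three servers; adding a fourth server
# (scaling out) raises capacity without touching the existing machines.
```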

How scalability delivers value to business

There are numerous ways that you can describe the real-world value of scalability; and since it is a core value of cloud (providing some of its flexibility and contributing to other key benefits), we can realize why this aspect of the technology has been so central to its popularity.

Benefit #1 – Speed

There is an untrue urban legend that it takes a year to paint the Golden Gate Bridge – the massive structure that spans a mile-wide strait connecting the Pacific Ocean to San Francisco Bay – in an indefinitely repeating process. While the year-long bridge painting may be a myth, the notion that it is time to reassess your work and improve upon it after a year is accepted in many areas of business (as seen in the many organizations that perform annual IT risk assessments). Meanwhile, designing and building an on-premises data center could take you more than a year. If it takes you that long, it should be unsurprising that some components will be outdated by the time you launch.

Being able to scale allows your company to move more quickly. Think about the notion of building a data center and waiting twelve months to capture and leverage the treasure troves of data being produced every day by your IoT devices. That would lose you sales, but it would also mean you are not creating as much opportunity as you could in terms of user behavior and what your customers want. You are able to get everything going with no delay at all when you have access to cloud servers. You can put yourself well ahead of the competition, or certainly abreast of them, if you are able to generate more insightful data faster than others in the industry can.

Benefit #2 – Mitigating bottlenecks and inefficiencies

Efficiency is a major challenge for business because it impacts your ability to expand now and in the future. You can make your business more efficient through scalability because the business is able to expand as demand allows, potentially into different locations on the planet – while remaining affordable. The efficiency of scalability, a capability that impacts all parts of your business, is seen in how it allows you to keep your production strong while limiting your costs.

Benefit #3 – Management consistency 

As with efficiency, you are able to become more consistent holistically, throughout your management ecosystem, when you have optimized your scalability. The value of becoming more consistent almost cannot be overstated. It allows you to become more compliant with your own guidelines and governmental regulations (think the all-new General Data Protection Regulation, or GDPR, from the European Union), while also bolstering the amount of revenue you generate. 

Benefit #4 – Future-proofing

Scalability matters because it readies you for the data challenges of the future. In fact, it is such a central concern that CX consultant Benjamin Payne calls it “the most critical factor to consider when selecting a knowledge base solution.” You are future-proofing your business because anything you do to make your organization work better now should be able to stay with you as you develop, without you having to switch horses midstream.

It becomes obvious that scalability is helpful for the future when you recognize the sheer pace at which data is expanding. Since data is now at a point that businesses are increasingly being overwhelmed by it (unable to use it to its full advantage), effective management of the data is a top priority. You need to properly maintain and safeguard the systems in which your data resides. Optimizing your scalability will allow you to stay relevant and on the path to your objectives at all times.

Benefit #5 – Meeting current needs

Scalability may sound like a flat yes-or-no prospect, but you are actually able to scale in the manner that best makes sense as you expand. Whatever your demand is at the moment, you can scale up or down as needed when you are using cloud infrastructure. Since you are building as you go rather than in single swipes, you can deviate from a single standard path, creating a hybrid cloud through private and public components and/or a multi-cloud environment that utilizes various vendors. By building the system you need as you go, you can get what you need for each segment of information – the right safeguards and space – in a seamlessly integrated yet sophisticated (and sufficiently diverse) infrastructure.

While scaling does not mean you are stuck with one approach or provider, it does mean your approach is generally more straightforward. In-house infrastructure, on the other hand, introduces so many complexities that you are not able to spend as much time on innovation because your focus is more squarely on maintaining your current systems.

Benefit #6 – Avenues for automation

Cloud’s scalability allows you, through a different service delivery model, to respond to demand on the fly – and through the work of a third party. By offloading that aspect of your systems, you are able to better automate your infrastructure, transferring workloads to underutilized parts of your infrastructure from ones that are overwhelmed.

Benefit #7 – Relevance of data

If you scale, you are able to maintain the efficiency that allows your application or website to remain relevant. In order to stay relevant, you must be able to assess what is there over time and update it as needed. Everything must grow holistically if you want your organization to be balanced – and that comprehensive approach is advanced through scalable cloud solutions.

Benefit #8 – Downturns or macro changes

If the economy were to slide into a recession or if other macrocosmic changes were to occur, you could hit a wall and go bankrupt. You can transition as you go if you have constructed a scalable business. Scalability is key here no matter whether you need contraction or expansion. If the market slows down suddenly and you have the ability to scale down quickly, you may find you collect more of the market since rivals will be sinking. However, you may simply need to scale back and stay smaller to weather an economic storm.

Benefit #9 – Flexibility

When you are relying on cloud systems rather than legacy ones to back your systems, you are able to benefit from flexibility that becomes possible when you innovate with cloud providers. The infrastructure is highly flexible, along with delivering a vast sea of memory capacity and impeccable performance. The scalability that becomes possible through third-party partnerships allows you to leverage your core strengths and tap into the expertise of technology vendors. The flexibility that is inherent in scalable solutions allows for better business agility to handle any turbulence or changes of the winds.

Part of the flexibility of scaling is in its reduction potential. Beyond the economic downturn notion discussed above, scaling will have its own benefits for you in allowing you to reduce scale as needed.

Benefit #10 – Business enablement

By making it possible for organizations to grow and shrink in the way that is best for current conditions, scalability is a business enabler. Consider the services that are entirely based on putting people together using IT services. Ride-sharing organizations do not have fleets of cars, only supply-and-demand infrastructure that allows drivers and passengers to connect. Provided the IT infrastructure is scalable enough to grow rapidly and in tune with demand, you can leverage that scalable computing structure to expand your business – using the cloud itself to propel your growth.

Moving forward with improved scalability

Are you in need of a highly scalable solution for your organization, to experience the many benefits described above? At Total Server Solutions, our cloud platform is built for optimal scalability, as well as incredible speed and reliability. Get the only cloud with guaranteed IOPS.

content distribution network


A content delivery network is designed to send video, images, JavaScript, HTML files, and other content to users through a distributed server network. Generally the two protocols used for CDN delivery are HTTP and HTTPS; however, sometimes other protocols can be useful, as with video.

The primary reason CDNs are used is to improve content delivery performance. Before the rise of the CDN, you would have webpages with text and images on them. In order for all those elements to populate properly, you would have to send through dozens or hundreds of HTTP requests. Every time a request is sent, that means your browser is establishing a connection with a server, letting it know what it needs, downloading the information, and presenting it. You could have everyone connect to the same server. However, you can achieve better performance if you are able to get content as physically close to each user as you can.
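One way to see this at work: many CDNs label responses with cache headers. This standard-library sketch fetches a URL and prints the headers where an edge cache typically announces itself. The URL is a placeholder, and the exact header names vary by provider – these are common examples, not a standard:

```python
from urllib.request import urlopen

url = "https://www.example.com/"  # placeholder; try any CDN-fronted site

with urlopen(url) as response:
    headers = response.headers
    # Header names differ by CDN; these are common examples only.
    for name in ("Server", "Via", "Age", "X-Cache", "Cache-Control"):
        if headers.get(name):
            print(f"{name}: {headers[name]}")
# A cache hit at a nearby edge node means the request never had to
# travel to the origin server - the distance saving behind CDN speed.
```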

CDNs are on the rise because of the considerable increase in the use of video and cloud services over the last few years. Reasons why CDNs are becoming so popular go beyond these trends, though. Once we look at why use of these systems is generally expanding and the benefits derived from them, we will take a closer look at how they work.

Video and cloud fueling CDN growth

CDN use has grown in recent years with the proliferation of video as a standard business tool. Video is resource-intensive in terms of how much bandwidth it uses, as well as how much disk storage it needs.

The increase in cloud service and app use is sparking additional CDN expansion. Today it is considered commonplace to store essential company information in a third-party facility. CDN services are often offered in conjunction with cloud data and video storage since the two services bolster performance beyond your walls.

You will lower the amount of stress you are placing on your own data center by using the servers of the CDN to process and transmit data. Hence, you do not need as much hardware yourself, explained Kevin Tolly. Because you do not need as much equipment, that also means you do not need the facility space, cooling, and power that support it. Your capital and operating expenses decline.

Reasons for using CDNs go beyond lightening your load and cutting costs, though. Here are 10 other key benefits:

Benefit #1: DDoS protection

DDoS attacks are terrible for websites, leaving you unable to respond to legitimate requests for hours as you are inundated with massive amounts of fraudulent traffic. As noted by technology author Simon Jones in TechFruit, a DDoS event means you cannot help prospects and customers during that time. You will be unable to usher leads and sales through your system. By taking advantage of the third-party infrastructure of a CDN and its DDoS protections, you can deliver high security and consistent service.

Benefit #2: SSL termination 

Content delivery networks do not always deliver static content; in some cases they simply act as a go-between from the application to the user. Used this way, CDNs both help prevent breaches and change the nature of the connection, freeing up the application’s servers. For application programming interfaces (APIs) and other highly active applications, organizations often use CDNs for secure sockets layer (SSL) termination, a type of SSL offloading.

The way an SSL connection works is that the server authenticates itself with a digital certificate before encrypted information is transmitted between client and server. If you use a CDN for SSL termination, you push the work the server would otherwise perform (the handshake and the encryption) onto hardware outside your own.
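
To make the idea concrete, here is a minimal, single-connection sketch of a TLS-terminating proxy in Python. It is an illustration under assumptions, not production code: cert.pem, key.pem, and the backend address are placeholders, binding port 443 requires elevated privileges, and a real terminator would handle concurrency, partial reads, and connection reuse.

```python
# Sketch of SSL termination: accept TLS from clients, decrypt, and forward
# plaintext HTTP to a backend that never touches crypto. Single connection
# at a time, for illustration only.
import socket
import ssl

BACKEND = ("127.0.0.1", 8080)  # hypothetical origin server

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("cert.pem", "key.pem")  # the terminator holds the cert

with socket.create_server(("0.0.0.0", 443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            conn, addr = tls_listener.accept()   # TLS handshake happens here
            request = conn.recv(65536)           # decrypted request bytes
            with socket.create_connection(BACKEND) as upstream:
                upstream.sendall(request)        # plaintext to the backend
                conn.sendall(upstream.recv(65536))
            conn.close()
```

The point is simply that the certificate and the decryption work live on the terminating hardware, while the backend only ever sees plaintext.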

Benefit #3: Speed

A CDN helps deliver the experience that consumers expect from the Internet. People like to shop online because of its convenience, speed, and immediate support. E-commerce is no longer an enticing proposition when your servers are going down or there are unexpected delays. When your servers are not delivering speed, people will go to a site that better respects their time and delivers a more solid user experience.

Keep in mind that CDNs are increasingly needed as sites get more popular. Traffic is a good thing until it overloads your site and leads to widespread performance issues. You will ensure that you give the best possible UX to your visitors and are unlikely to see latency issues when you send traffic through a CDN.

Benefit #4: Worldwide reach

Related to the issue of speed, a content delivery network can be especially helpful when you have users spread across the globe, because a well-designed CDN is distributed internationally. Although any organization can benefit from a CDN, the biggest gains go to those with customers in the US, Asia, Europe, and other locations worldwide; in those cases, the UX for users across the planet is bolstered. By contrast, a website whose infrastructure is housed in New York and whose primary customer base is in New York will not see performance gains as strong, since geographic distribution is not a problem.

Benefit #5: Simpler operation

The use of a CDN offloads the amount of work that is performed on-premises. The operation of your organization’s IT should become simpler. Having less hardware under your own roof means you do not have as much equipment to store and maintain.

Benefit #6: Enhanced security

Along with optimized content delivery performance, security is also core to these systems – and it goes beyond the advantages of SSL termination and DDoS mitigation. Managed web application firewalls (WAFs) and bot defenses can be added. You get better longevity out of your in-house equipment by using the security features of CDNs to limit the load that goes through your internal security equipment.

Benefit #7: Image optimization 

You can also often combine CDNs with image optimization services. Whether someone is accessing your site from a cellphone, a desktop computer, or anything else, your images can be dynamically optimized for that device.
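
As a sketch of what such a service does under the hood, here is a small Python example using the Pillow imaging library to produce a device-sized variant of an image. The file names and the 480-pixel width are assumptions for illustration; a real CDN performs this automatically at the edge.

```python
# Sketch of per-device image optimization with Pillow: downscale to the
# requesting device's width and re-encode at a web-friendly quality.
# File names and the 480px width are placeholders.
from PIL import Image

def optimize(src: str, device_width: int, dest: str) -> None:
    img = Image.open(src).convert("RGB")        # JPEG output needs RGB
    if img.width > device_width:                # never upscale
        ratio = device_width / img.width
        img = img.resize((device_width, round(img.height * ratio)))
    img.save(dest, format="JPEG", quality=80, optimize=True)

optimize("hero.jpg", 480, "hero-mobile.jpg")    # e.g., a phone-sized variant
```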

Benefit #8: Surge-ready / slashdot-friendly

Perhaps the worst thing that can happen to your site is to get a huge amount of real (non-DDoS) traffic – a massive opportunity – and then not be able to handle the influx.

This issue often arises when a large site links to a small site, pushing through a large number of referred visitors. This process, called the slashdot effect or simply slashdotting, results in a traffic surge that could render your site unavailable or at least very slow. Slashdotting is not the only way you might become overwhelmed by traffic, of course; you might generate the traffic more directly, too, noted Jones. For example, you might run a contest that creates excitement or an ad campaign that goes viral.

If you ever have a situation in which big traffic comes through unexpectedly, you may not be able to handle the load, resulting in crashed systems. Using a CDN service will enable seamless and reliable operation in these cases. It will be able to take care of those large bursts and keep up with the demand – typically much more successfully than the origin servers can.

Benefit #9: Reduce your data center footprint

Reducing the carbon footprint of data centers is a critical concern for many organizations. The extent to which IT infrastructure is a sustainability issue is clear in the findings of a 2014 report from the Natural Resources Defense Council (NRDC). The NRDC determined that data centers at that point produced 200 million metric tons of carbon dioxide and used 3 percent of the energy generated worldwide. The problem is growing more severe, too, with US data center consumption expected to grow from 91 billion kilowatt-hours (kWh) in 2013 to 139 billion kWh in 2020. While using a CDN service does not do away with the issue of sustainability, tapping into the greater efficiency of a CDN will make you more environmentally friendly.

Benefit #10: Branch-office services

Content delivery networks are also attractive to branch offices of companies. A branch can treat the main location as the origin server and let the CDN store and deliver its content. The branch can get DDoS mitigation and WAF service through this same arrangement, pushing work that would otherwise be done by hardware at the branch out to the CDN edge.

How the CDN works

Now that we better understand why the CDN is so popular, let’s take a closer look at what it does. It operates as a proxy for your resources and services. When your customers enter your web address, the domain name system (DNS) automatically redirects them to the appropriate CDN node. Since that’s the case, your customers will usually not know a CDN is being used.
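
You can observe this redirection yourself. The following Python sketch, using only the standard library, prints the CNAME chain and addresses for a hostname; www.example.com is a placeholder, so try a domain you know sits behind a CDN.

```python
# Observe the DNS redirection: a CDN-fronted hostname usually resolves
# through a CNAME chain to a nearby edge node. Placeholder hostname below.
import socket

canonical, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print("canonical name:", canonical)   # often the CDN's edge hostname
print("aliases:", aliases)            # the names that pointed here
print("addresses:", addresses)        # edge IPs chosen for your resolver
```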

Leveraging its own dedicated points of presence (POPs) as well as those of outside internet service providers, the content delivery network positions content servers near users. Instead of public internet links, a CDN will often utilize private, dedicated lines to interconnect its edges.

Requests from your users are monitored and processed at the CDN edge. In the likeliest use case, static web content is checked for freshness before delivery. CDNs also make it possible to stream video stored within the CDN, in which case you do not need to burden your WAN or your own servers. Software updates, product catalogs, and other customer-facing files can be stored with your CDN as well, in which case they are also accessed and delivered straight from the CDN.
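
That freshness check is typically an HTTP conditional request. Here is a minimal Python sketch of the pattern with a placeholder URL: the first response’s ETag is sent back on revalidation, and a 304 status means the cached copy can be served as-is. (Whether a given origin returns an ETag is an assumption here.)

```python
# The freshness check as an HTTP conditional GET: send back the stored
# ETag; a 304 reply means the cached copy is still current.
import requests

url = "https://origin.example.com/logo.png"   # placeholder URL

first = requests.get(url, timeout=10)
etag = first.headers.get("ETag")              # assumes the origin sends one

revalidate = requests.get(url, headers={"If-None-Match": etag}, timeout=10)
if revalidate.status_code == 304:
    print("cache still fresh - no body re-downloaded")
else:
    print("content changed - store the new copy")
```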

A high-performance CDN

Other reasons to use a CDN include geoblocking, better SEO, and higher resilience. While there are many different reasons to use one, the top benefit is speed – which relies heavily on proximity. At Total Server Solutions, our CDN nodes are close to your customers, wherever they are, leading to happier viewers, fewer abandoned carts, and more completed transactions! Get the reach you need.

Importance of communication to ecommerce -- common problems and solutions

Posted by & filed under List Posts.

How to communicate effectively with online shoppers is a question that every ecommerce company must ask constantly and from many different perspectives. One way to look at ecommerce communication is in terms of problem-solving. Here are eleven common mistakes, along with solutions: 

Mistake #1: Not communicating enough 

Solution: Make customer communication a priority, especially right after purchase. Understandably, the work of running an ecommerce business never ends, with marketing campaigns to run and orders to ship, so communicating with customers can end up low on the priority list. Taster’s Club founder Mack McConnell told Arianna O’Dell that he learned to communicate with customers soon after they placed an order – that human interaction was particularly critical at that point. 

Mistake #2: Limited channels 

Solution: Expand your toolset. Part of communication is about giving people different avenues through which to talk. Key ecommerce communication channels that you have probably already adopted or considered, as suggested by Ajeet Khurana in The Balance Small Business, are:

  • Email – For electronic commerce, electronic mail is essential. You want an email address as a point of contact for customers, along with a ticketing system that makes it possible to respond to numerous similar emails simultaneously.
  • Phone – Many think that staffing a phone line does not justify the resource consumption. However, phone personnel are valuable for answering questions through the channel that is most comfortable for some buyers.
  • Live chat – Many shoppers look for live chat so they can get questions answered immediately through your site. A wait time for live chat is generally acceptable to customers because they can keep using the computer as they wait. Like phone, live chat is resource-intensive, but it is still popular since customers often prefer it.
  • Blog – Blogging gives you a way to communicate with customers and potential customers through content published periodically over time, keeping the site’s language fresh. Your blog keeps your site current (even if you write evergreen content, each piece inevitably draws on newer sources and topics), and it also helps you build thought leadership and search authority.
  • User-generated content (UGC) – While it is important for you to share knowledge and ideas with customers through your blog, it is also an excellent idea to build community by tapping their thoughts. People will be likelier to feel loyal toward your site if it allows them to submit their own comments, reviews, and other thoughts through forums and other channels, via text, image, and video.
  • Ads – Advertising is inevitably costlier than you want it to be. Think of your ads straightforwardly in terms of “staying on message” (with an eye consistently toward communication), even as you adjust and tweak to get the best possible response.
  • Product descriptions – Last but certainly not least, think about how you communicate through your product descriptions. It is very important to your search-engine rankings to replace the manufacturer’s default stock text with your own. Otherwise, it is not original content, and Google, Bing, and other search engines give significant weight to content originality. As an indication of why, Google’s mission statement says the company intends “to organize the world’s information.” Google is in the business of information; if your site is not feeding it new information, in the form of new content, it will not value your site the way it does others.

Mistake #3: Failing to leverage social media 

Solution: Social media is another key form of communication that deserves its own attention. It can be confusing to determine the extent to which you want to invest time and resources in various platforms, but social media generally can give you a stronger community and greater brand awareness. It also gives you an environment in which to tell your story. To be clear, though, you aren’t just opining on social media but leveraging the opportunity to spark discussion. 

Four of the most common social media sites for connecting with people today are Facebook, Instagram, Snapchat, and Twitter. While Facebook is broader in its focus, Instagram is known for images, Snapchat for short video, and Twitter for short-form posts. You don’t have to restrict a platform strictly to one type of content; you can build the same messages into various formats for use on each platform. Much of your choice of platform will come down to who uses it. Study your audience to determine where they are, how old they are, and other demographics. Using that data, find the social platforms popular with those groups. 

Mistake #4: Neglecting to encourage feedback 

Solution: Collecting and analyzing customer feedback is key. The data and comments they provide give you valuable insight into how people perceive your brand and how customers are discovering you. McConnell also noted that gathering information from customers allows you to know what you need to fix. 

Mistake #5: Not considering storytelling 

Solution: There has been great discussion of storytelling as a marketing tactic, as a way to get the engagement you need to keep people on your site and returning for more. When you tell a story about your brand, you can convey information about your product within an image that you control.

There are many ways for storytelling to become part of your communication. You can use TV or radio ads if you have the money for that. Storytelling can also be used within your blog. Whether you are centered on storytelling or more specifically on information sharing, your blog is a good place to express knowledge and create intimacy.

To integrate storytelling, use these tips from Thibult Herpin in E-Commerce Nation:

  • Convey a positive image. Inspire people and get them excited by showing them testimonials or otherwise showing your products in a pleasant light.
  • Be accessible. Use a story and protagonist to which your prospects can relate.
  • Be emotionally charged. Help build interest in your story by using language that will stimulate emotional response from your customers.
  • Get granular. You can help people see the world of your story with details. Be careful as you get granular, because you want your information to be helpful and not overwhelming.

Storytelling as it is often explained, with the creation of character and setting, is not necessary for or appealing to every company, but it is certainly worth exploring as part of your content strategy.

Mistake #6: Boring standardized emails 

Solution: Order confirmations, delivery information, shipping status, and other transactional messages do not have to use the same sleep-inducing default text built into the ecommerce platform, advised Richard Stubbings in PracticalEcommerce. You can customize the language to better engage your customers. Point out your return policy details and steps they can take to solve problems. Give them tracking information and delivery details, and consider tying in loyalty promotions or other discounts. 

Mistake #7: Uninspired 404 error pages 

Solution: Typically a 404 error page (also called a 404 Not Found) simply tells you that you have reached the server but it could not find what was requested. A/B testing company Crazy Egg noted that an optimized 404 page should explain what went wrong, speak plainly or humorously, give visitors paths to stay on the site (such as a link to the homepage and a search bar), and maintain the same design theme as the rest of your site. There is another way to go after the 404 issue, and that is by fixing broken links. Check for missing media and articles once monthly – a check you can automate, as in the sketch below.
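
For example, here is a minimal Python sketch of such a monthly check, using the requests library and the standard-library HTML parser. The start URL is a placeholder, and a real crawler would also respect robots.txt, rate-limit itself, and recurse beyond a single page.

```python
# Minimal monthly link-check sketch: pull a page, extract hrefs with the
# standard-library HTML parser, and report anything that returns 404.
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

base = "https://www.example.com/"          # placeholder for your own site
collector = LinkCollector()
collector.feed(requests.get(base, timeout=10).text)

for href in collector.links:
    url = urljoin(base, href)              # resolve relative links
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    if status == 404:
        print("broken:", url)
```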

Mistake #8: Empty search pages

Solution: When you offer a way to search your site, you also open yourself up to the possibility of sending customers to dead ends – search results with “no results found.” Instead of accepting those dead ends, change that page so that it links to relevant categories and/or recommends similar products as alternatives, so shoppers can keep browsing.

Mistake #9: Standard abandoned cart emails

Solution: It is a good idea to seriously consider your abandoned cart messages and whether you believe in sending them, as indicated by Stubbings, who argued against them. If you keep sending them, measure the conversion rate from them so you know if they’re working. Also, be certain that the site did not turn down the order and the customer did, in fact, leave behind their cart. The worst-case scenario is if a shopper tries to buy from you and gets denied because their payment doesn’t work or you don’t mail to their location. If that person gets an email that says they can get a discount to purchase the same item, they would be frustrated if they again were blocked from purchasing.

Mistake #10: Complex unsubscribe process

Solution: Whenever you send out a marketing email, make sure there is a clear unsubscribe link; a header-level sketch of one approach follows after this list. If someone finds the process easy on the way out, they may be willing to sign up again later. Difficult unsubscribe processes tend to have the following characteristics:

  • When you click the Unsubscribe link, you hit a 404 error message.
  • When you get to the Unsubscribe page, you have to enter your email address – in some cases a second time for verification.
  • When you get to the confirmation page, it tells you that you will get an email to finish the process or that it will take effect in 7-10 days.

You do not need to slow people down when they want to exit, and it is not in your best interests. “Be as graceful as possible,” said Stubbings.
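
One low-friction option is to advertise the unsubscribe action in the message headers themselves, which many mail clients surface as a built-in button. Here is a minimal Python sketch using the standard library’s email package; the addresses and URL are placeholders, and the one-click semantics come from RFC 8058.

```python
# Sketch: advertise unsubscribe in the headers (RFC 2369 / RFC 8058) so
# mail clients can show an "Unsubscribe" button. Placeholder values below.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "shop@example.com"
msg["To"] = "customer@example.com"
msg["Subject"] = "This week's deals"
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe?u=TOKEN>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"  # RFC 8058
msg.set_content("...newsletter body with a visible unsubscribe link too...")
```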

Mistake #11: Failing to focus on infrastructure

Solution: Do you want to communicate effectively for the strongest possible ecommerce growth? One way you communicate is through the performance of your site. By backing it with a powerful infrastructure, you save your customers and prospects valuable time by serving everything they request more quickly.

At Total Server Solutions, our infrastructure is so comprehensive and robust that many other top-tier providers rely on our network to keep them up and running. See our high-performance web hosting for e-commerce.

Cloud computing spending 2018, along with types and benefits

Posted by & filed under List Posts.

An analysis that was released in January found that public cloud spending would exceed $160 billion in 2018, and by 2021, would almost double. The United States was allotting more money to cloud than any other nation, with China expected to shift into the number-two slot – moving past the United Kingdom, Germany, and Japan – by 2021.

The report, which was released by the International Data Corporation (IDC), found that spending on cloud was rising 23.2% in 2018 vs. 2017, reaching $160 billion. It was then expected to continue growing at a slightly less breakneck pace of 21.9% through 2021, hitting $277 billion at that point.

Discrete manufacturing was projected to spend more on cloud than any other economic segment, at $19.7 billion. Directly below discrete manufacturing were professional services at $18 billion and banking at $16.7 billion. Below those were process manufacturing and retail, each expected to spend over $10 billion.

To better understand growth of the cloud, we can look at it through a series of questions:

  • What is cloud computing, and why is it used?
  • What are the three primary types of cloud?
  • What is the history of cloud?
  • What other research suggests fast cloud growth?
  • What are the primary benefits of cloud?

What is cloud computing, and why is it used?

Cloud computing is a technology described by Merriam-Webster as meeting two chief specifications – it allows data to be accessed over the internet, and it stores the data on multiple servers. Through cloud service providers (CSPs), firms can lease access to storage and applications instead of having to run their own data centers or infrastructure.

One reason many organizations say they use cloud is that they do not have to spend as much up front. Companies also like that they do not have to worry about purchasing and maintaining their own IT equipment and facilities; they can instead pay for whatever they need through third parties. Meanwhile, CSPs are able to achieve substantial economies of scale by providing many different customers with the same services.

What are the three primary types of cloud?

The basic types of cloud are infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS).

IaaS, also called cloud hosting, is an arrangement in which the vendor builds and supports hardware that it configures and virtualizes, allowing customers to deploy computing resources or virtual servers without investing in physical machines or handling all the challenges of managing servers. IaaS generally includes the servers, storage, networking, and virtualization. Customers may still be responsible for applications, databases, security features (although the cloud may be housed in an SSAE 16 audited facility), and the operating system.

Platform-as-a-service frees developers from operating system updates, security patches, and other infrastructure tasks so they can focus on coding, testing, and releasing apps. Version control, monitoring, and traffic-splitting tools are often built into platforms, along with application programming interfaces (APIs).

Software-as-a-service allows users to access an application provided by a host via the internet. Dropbox and Salesforce are prominent examples of SaaS applications. With SaaS, users do not have to worry about backing up the systems, updating them, supporting them, or developing the code.

What is the history of cloud?

Not everyone sees the origin of cloud computing the same way. Some say that cloud computing goes back to the 1960s and J.C.R. Licklider, who suggested the idea of an “intergalactic computer network.” In 1969, Licklider helped enable development of the Advanced Research Projects Agency Network (ARPANET). He wanted people throughout the world to be able to access whatever data and applications were running at any location from any other place.

Others credit John McCarthy, who suggested that computation could be offered on a public utility model.

Cloud has developed along several lines, most recently via Web 2.0. Cloud was not a mass-delivered product until recently, because significant bandwidth only became available through the internet in the 1990s.

What other research suggests fast cloud growth?

The finding in the introduction is just one indicator that cloud is on the rise. For example, another finding from IDC is that more than one-third of IT spending now goes toward cloud. With increasing amounts of money going toward public cloud and private clouds built on-premises, the amount spent on traditional on-premises computing is dropping.

Per Gartner, half of the worldwide companies that have adopted cloud will be shifting 100% of their systems to cloud by 2021. This growth is fueled by an expanding need for management, security, application, and infrastructure services through outside parties. In 2018, worldwide spending on cloud will hit $260 billion, rising from $219.6 billion in 2017. This rate of growth is higher than previous analyst forecasts.

What are the primary benefits of cloud?

There are numerous ways in which companies benefit from cloud adoption:

The cloud enhances collaboration. Per the Cloud Security Alliance, 79% of organizations receive frequent requests for additional cloud systems, with file-sharing environments among the top solutions of interest. Collaboration is a central characteristic of cloud, since you can access files from any location and edits are applied centrally.

The cloud has strong security. While organizations used to shy away from the cloud for its security, today it is seen as an asset. Facility access is highly controlled, and hardware monitoring is continual. In fact, cloud has been promoted as more secure than on-site infrastructure due to the strong focus on security best practices at these firms. “Very experienced staff maintain these infrastructures, processes are tight and there are many eyes on these systems at all times,” noted Zach Lanich.

Cloud improves agility. You can better predict time-to-market with cloud, and with fewer full-time equivalents (FTEs), because IT projects are shortened when you can get resources on-demand. Greater agility leads to a stronger competitive stance, since you can produce results more quickly and inexpensively. One industry observer noted that using cloud for a data analytics project allowed a steep drop in cost and significantly better time-to-market, with delivery time falling from 4 months to 3 weeks.

Cloud does not require as much capital. One of the main struggles startups face from the beginning is paying their staff while proving out their business model. Buying your own servers is expensive, and cloud is a way to avoid those big up-front costs: you just pay for your storage and processing needs each month. Plus, the systems are updated automatically, since the cloud provider is in charge of all updates; there is no need to pay for equipment upgrades. You get the service you need without the hassle.

A strong cloud partnership

Cloud spending is increasing for all the reasons described above. Do you want to take advantage of cloud for your organization? At Total Server Solutions, with our SolidFire-SSD-based SAN storage, we are able to provide IOPS levels that are unmatched by virtually any other cloud hosting provider. We do it right.

DDoS history -- distributed denial of service attacks

Posted by & filed under List Posts.

With the rise of the Internet of Things (IoT), experts have warned that connected devices are incredibly vulnerable from a security perspective – and they have been exploited by DDoS attackers. In September-October 2016, nearly 50,000 connected devices, spread across 164 nations, were used to generate traffic as high as 280 Gbps, sent into targets’ networks primarily from digital video cameras. Following that attack, security journalist Brian Krebs was hit with a massive assault that reached a whopping 620 Gbps – followed by the attack on DynDNS. The DNS firm had to protect its infrastructure against a packet rate that reached 100 Mpps (million packets per second) – a real-time issue that was a bigger problem for them than the peak bandwidth.

How did we get here?

Three markers of the rise of DDoS

In three key ways, DDoS has expanded over time:

  1. Increasing degree of sophistication – While simple SYN floods used to dominate DDoS, today’s attacks are intricate, going after services, infrastructure (VPS, firewall, etc.), software, and bandwidth simultaneously (so-called multi-vector attacks). Multi-vector attacks initially required skill; as cybercrime matured, it became possible for anyone to launch them.
  2. Increasing frequency – Today, anyone can perform a huge DDoS attack as DDoS has been weaponized. The rate of occurrence of attacks has grown, as has the occurrence of huge attacks. Reports from the first quarter of 2018 showed that DDoS attacks were growing in frequency (as well as in length and size).
  3. Increasing volume – The size of DDoS attacks became larger with the incorporation of IoT botnets and use of new innovations such as reflection and amplification. Because of these factors, the attacks of recent years are much larger than the ones that were sustained by ISPs in the late 90s.

Timeline of DDoS development

We can get a better sense of DDoS evolution by looking at a timeline of major events related to these attacks – which takes us back to the early 1970s:

1973 – It is difficult to determine the exact date of the first denial of service (DoS) attack, but Robert Lemos suggested in eWeek that the initial one may have occurred in 1973 (according to an unverified story told by David Dennis, adjusted to account for a probable mistake he made about the year). The attack was said to have occurred on the Programmed Logic for Automatic Teaching Operations (PLATO) system at the University of Illinois at Urbana-Champaign (UIUC), which was used for instruction and as an online community (a precursor to the Internet). Dennis claims to have caused it as a 13-year-old high school student, when he wrote a program and deployed it to users of PLATO, forcing many of them to restart simultaneously. He claims he subsequently used the same technique on several networks locally and nationally, succeeding until the ext command was changed.

1995 – Manual DoS protest attacks were conducted by activists in the late 1990s. These activists started to think of the Internet as a place that could be used as a form of protest, through access prevention. The Strano Network was one of the first groups to engage in this activity.

1998 – This year was when distributed denial of service (DDoS) emerged (although it would not become widely notorious until 2000). FloodNet, a tool created by another group of activists called the Electronic Disturbance Theater (EDT), could be downloaded and run on users’ computers; it would then go after various sites, following a list supplied by the EDT. The same year, cybercriminals started using simple but effective Smurf attacks, which leveraged the Internet Control Message Protocol (ICMP) to prompt other servers to ping a target. These were the first prominent instances of reflection/amplification attacks.

1999 – The Trinoo bot, made up of 227 infected Solaris servers, was used to attack the University of Minnesota.

2000 – The first DDoS attack to get significant press occurred when Mafiaboy, a 15-year-old Canadian boy, brought down various major corporations, including Amazon, eBay, Yahoo!, and Dell. The Computer Emergency Response Team (CERT) Coordination Center also noted that there would be more DDoS attacks that amplified bandwidth by using the domain name system (DNS).

2003 – Worms had become ever more problematic for system administrators at the beginning of the century. The 376-byte MS SQL Slammer worm, the first flash worm, was let loose in 2003. This worm’s speed was unprecedented: it doubled the number of infected systems every 8.5 seconds, overloading network bandwidth in just 3 minutes. (At that rate, 3 minutes is roughly 21 doublings – about a two-million-fold increase.)

2005 – 8 Gbps was the largest amount of DDoS traffic that was reported by any respondent in the annual Worldwide Infrastructure Security Report (WISR) from Arbor Networks. (Compare to today’s figures below.)

2007 – A statue honoring World War II Soviet soldiers who fought against Nazi Germany was moved in Estonia. The decision created a diplomatic dispute between Estonia and Russia, and Estonia suffered repeated DDoS attacks.

2008 – Anonymous started a series of actions, including against the Church of Scientology, in which they defaced sites or hit them with DDoS attacks.

2011 – Sony fell victim to a massive DDoS attack. This attack seemed to have been used as a distraction as the thieves stole PlayStation Network customer records.

2013 – At 300 Gbps, the largest DDoS attack measured up to that point hit Spamhaus, which had named – and blacklisted – the hosts of botnets, spam networks, and cybercrime outfits.

2014 – On Christmas Day, Xbox Live and the PlayStation Network were hit with a DDoS attack, with Lizard Squad taking credit for it.

2016 – Politically motivated DDoS attacks were central to this year. The US Department of Defense was pummeled with a barrage of spam in late January, and the Russian military was similarly hit with a DDoS attack in March. The Reaper (IoTroop or IoT_reaper), a botnet attributed by some to North Korea, continued to grow more powerful: Qihoo 360, a Chinese web security company, reported that The Reaper had enslaved 10,000 devices, all of which were interacting with the cybercriminals’ servers regularly, and that it could potentially add millions more IoT devices via an automatic loader. An attack of 500 Gbps lasted throughout the Olympics in August. And as DDoS took center stage with Mirai, an attack that peaked at 620 Gbps was carried out by an IoT botnet against Brian Krebs.

2018 – Memcached was used to attack GitHub, causing a disruption of approximately 10 minutes. Per GitHub’s engineering department, 1.35 Tbps of traffic was targeted at the collaborative-software service. The Memcached protocol was subsequently shown to enable amplification through web-connected servers by a factor of as much as 51,000: attackers could wage a simple attack and have it amplified, slamming a network with far more sizable packets. There was also a major blow to criminal DDoS efforts when Webstresser, a DDoS-for-hire service credited with causing 4-6 million DDoS attacks between 2015 and 2018, was shut down by authorities in the Netherlands, the UK, and the US, and its leadership arrested.

Defending against DDoS

Denial-of-service attacks have certainly come a long way since they were first deployed in the early 1970s, morphing into ever-more-sophisticated distributed-denial-of-service (DDoS) events. As DDoS attacks have become larger and more expensive, the importance of working with experts on your defense has skyrocketed. Safeguard your site against the hassle and expense of a DDoS attack.