2018 Cloud Computing Predictions & Trends Part 2


<<< Go to Part 1

 

Security and privacy will be even more important.

 

If you are in a heavily regulated industry such as healthcare, you may hear the words security and privacy so often that your eyes start to roll back in your head. There is good reason for that obsession when a company could be liable for a federal fine and a round of bad publicity – but all businesses should pay increasing attention to this overarching concern. Consider this: the Equifax breach alone impacted 143 million people. Security and privacy are no longer just business concerns; they are increasingly getting the attention of the popular press and consumers.

 

Security and privacy are often reasons firms have hesitated to implement cloud. Since that’s the case, let’s look to the opinions of computing thought leaders and analysts. David Linthicum has argued in a couple places for cloud’s strengths; the titles say it all: “The public cloud is more secure than your data center” and “Clouds are more secure than traditional IT systems.”

 

Similarly, in a 2017 Gartner report on cloud security, Kasey Panetta posits that chief information officers and heads of IT security should set aside any concerns they have about moving forward with cloud. Panetta writes that the research firm found security should not be thought of as a “primary inhibitor to the adoption of public cloud services” because the security provided through well-built cloud systems “is as good or better than most enterprise data centers.” One of the major pieces of evidence Gartner uses to back this claim is simply the number of attacks on cloud versus those against legacy systems: compared to breaches of traditional data centers, public cloud implementations of infrastructure as a service, or IaaS (aka “cloud hosting” data centers), are hit with 60% fewer attacks. Perhaps that is in part because attackers avoid systems that are run with extreme attention to security tooling and monitoring (partly to overcome clients’ concerns about the technology). Whatever the reason, cloud should now be considered more than safe – safer than the traditional alternatives.

 

In order to deliver a strong security stance, you want to build protections as a series of layers. A hacker might peel off one layer, but additional layers still stand between them and your data. Any operation that is engineering a public cloud should have extraordinarily robust security layers in place, as can be verified through standard attestations such as SSAE 16 compliance.

 

With cloud, instead of being able to attack your website directly, an attacker would have to go through the third-party provider; the effort is instantly more complicated. Furthermore, you can create private clouds and integrate them with your public cloud as desired for additional protection.

 

Public cloud will power more enterprise apps.

 

Clint Boulton (cited in part 1) notes in CIO that enterprises have started to host their mission-critical systems in a public IaaS setting. Examples include Dollar Shave Club and Cardinal Health. Other enterprises deploy SAP and similar business apps in public cloud. Cloud will increasingly become the first choice for hosting software, according to Forrester researcher Dave Bartoletti. Bartoletti says that “the cloud is the best place to get quick insights out of enterprise data,” allowing companies to take their innovative thinking and convert it into technology and intelligence.

 

More cloud lift-and-shift will emerge.

 

Firms will often have systems running on legacy hardware that they want to move to a public cloud. These companies may not just want cloud but could benefit from help getting there. Ideally they would rewrite their code to embrace the dynamic nature of cloud platforms, but rewriting is expensive and slow – which is why lift-and-shift migration, moving applications largely as they are, is so appealing. With a focus on developing technology for easier lift-and-shift, companies will be able to affordably perform bulk app moves, making the entire process of switching to cloud faster.

 

A cloud-based Internet of Everything will become more prominent.

 

During 2017, smartphones and tablets were used increasingly for communications and ecommerce – causing a surge in both the Internet of Things (IoT) and artificial intelligence, notes WebProNews. In 2018, the IoT will continue to have a dominant presence in computing. However, the broader Internet of Everything (IoE) will also become critical, as real-time analytics solutions and cloud solutions become more sophisticated.

 

As the cloud computing ecosystem becomes smoother and more robust, the IoE will become more efficient and streamlined as well, because it relies on machine-to-machine interactions, the performance of the systems processing data, and individual human beings engaging with their surroundings. As this field grows, people will be able to communicate with other devices on a network seamlessly and with smarter information, and these systems will allow deeper and more meaningful exchanges between different parties.

 

Internet of Things

 

While the Internet of Everything will be a concern in and of itself, it would be a mistake to push aside the importance of the IoT. In 2018, companies will use edge computing to better deliver Internet of Things projects. A basic gain of edge computing within a cloud setting is that you lower bandwidth and resource needs by sending analytics findings to central points, rather than transmitting all data in its raw form for processing. Using edge computing in this manner is useful in an IoT setting because the ongoing flow of data is so voluminous that it strains servers within a traditional data center. By using cloud instead, a company is able to retrieve whatever data it wants, when it wants it. Edge computing has become more prevalent both because of its own strengths and because of the ways that AI and the IoT are intertwined. Beyond fueling the development of smart cities and smart homes, the IoT is also increasing the use of artificial intelligence platforms; AI tools at the edge can reduce traffic on your network, speed up responses, and improve customer retention. Gartner has noted that AI edge computing use cases are starting to appear.
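
To make the bandwidth point concrete, here is a minimal sketch of edge-side aggregation in Python. The sensor readings, summary fields, and ingest endpoint are all hypothetical – the idea is simply that the edge node boils a window of raw readings down to a small summary and ships only that summary to a central collection point.

    # Minimal sketch of edge-side aggregation (hypothetical sensor feed and endpoint).
    # Rather than streaming every raw reading to the cloud, the edge node summarizes
    # a window of readings and sends only the summary, cutting bandwidth needs.
    import json
    import statistics
    import urllib.request

    CLOUD_ENDPOINT = "https://example.com/ingest"  # placeholder URL, not a real service

    def summarize(readings):
        """Reduce a window of raw sensor readings to a compact summary."""
        return {
            "count": len(readings),
            "mean": statistics.mean(readings),
            "min": min(readings),
            "max": max(readings),
        }

    def push_summary(summary):
        """Send the aggregated result (not the raw data) to a central collection point."""
        payload = json.dumps(summary).encode("utf-8")
        request = urllib.request.Request(
            CLOUD_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request) as response:  # would need a real endpoint
            return response.status

    if __name__ == "__main__":
        window = [21.4, 21.9, 22.3, 35.0, 21.7]  # e.g., temperature readings at the edge
        print(summarize(window))  # only this small dict would travel over the network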

 

Tools supporting inventory control, workflow management, and supply chain networks all now have IoT use cases – which will continue to proliferate in 2018, notes AnalyticsWeek. As companies become increasingly dependent on the Internet of Things, they will in turn have to update their business software for the modern era.

 

Containerization

 

Companies are aggressively moving to containers as a simpler model for code management and migration. Organizations are using containers, among other things, to make it easier to move their apps from one cloud to another, says Bartoletti. Generally, they want to be able to achieve faster time-to-market with a cleaner devops approach.

 

Cloud hosting systems now recognize that the ability to integrate the use of containers is key.

 

This emerging approach to software portability is genuinely useful. However, it will take time to adjust intelligently to the new landscape: networking, storage, monitoring, and security problems will become more apparent as containers are more widely implemented. Often companies will choose a hybrid solution blending private and public components.

 

Choosing the right cloud

 

The above look at 2018 forecasts and trends shows how cloud is changing on the whole, as a market and in terms of the trends that are building. While that overview is helpful, it is also important to consider how you will meet your business's specific cloud needs.

 

At Total Server Solutions, you will get the fastest, most robust cloud platform in the industry. We do it right.

2018 Cloud Computing Predictions


On November 7th, Forrester Research released its list of top cloud computing predictions for 2018. This release is an event in the sense that it gives the public, individual businesses, and technology thought leaders a better understanding of how this important market and area of knowledge is developing.

 

After all, cloud computing is growing incredibly fast as an industry and economic segment. $219 billion was spent worldwide on cloud services in 2016, per Gartner research director Sid Nag in an analysis released in October 2017.

 

Also in 2016, software as a service (SaaS) spending reached $48.2 billion, beating Gartner’s previous market projection. As they say, but wait… there’s more – more growth forecast for cloud moving forward. Gartner estimates that infrastructure-as-a-service (IaaS), another term for cloud hosting, is currently clocking a head-turning 23.31% compound annual growth rate (CAGR). To get a sense of that rate of growth, it helps to look at the historical United States gross domestic product (GDP). From 1948 to 2017, GDP grew at an average annual rate of 3.19 percent, according to economic indicator resource Trading Economics.
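
To see what that compounding difference looks like, here is a quick back-of-the-envelope calculation; the 23.31% and 3.19% rates come from the figures above, while the starting value and five-year horizon are arbitrary and purely for illustration.

    # Compare a 23.31% CAGR against GDP-like 3.19% growth over five years,
    # starting from an arbitrary base of 100.
    cagr = 0.2331
    gdp_rate = 0.0319
    base = 100.0

    cloud = base
    economy = base
    for year in range(1, 6):
        cloud *= 1 + cagr
        economy *= 1 + gdp_rate
        print(f"Year {year}: cloud-like {cloud:.1f} vs GDP-like {economy:.1f}")

    # After five years the faster-growing figure is roughly 2.85x its starting size
    # (100 * 1.2331**5 is about 285) versus about 117 at the GDP-like rate.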

 

Cloud computing: market & general predictions

 

Here are predictions for growth of cloud from that same Gartner report, to keep looking (for a moment) just at the level of market expansion:

 

  • Total spending on cloud services will show global expansion of 18.5% in 2017 – reaching $260.2 billion.
  • Also during 2017, the amount designated for SaaS will rise 21%, hitting $58.6 billion by year's end.
  • Cloud hosting, or IaaS, will show a remarkable expansion of 36.6% in 2017. Reaching a total market of $34.7 billion by the end of the year, this segment is growing faster than any other type of cloud solution.

 

The basic point about the state of cloud and its projected rise is that this field is skyrocketing. However, a stronger takeaway is the nature of this growth – understanding exactly what is involved, the trends, and the other forces that are influencing the area. We get a better sense of that broader perspective through highlights from the 2018 edition of Forrester's annual cloud computing forecast:

 

#1 – Savvy businesses will become more conscientious about vendor lock-in as many cloud firms continue to consolidate. (Note that consolidation is particularly prevalent with firms that do not have general hosting packages and other niche specializations.)

 

#2 – Many software-as-a-service (SaaS) firms will offer their services through platforms as well as standalone.

 

#3 – For those companies headquartered outside of North America, cloud-based platforms will increasingly center their attention on the region, or on the specifications and concerns of a particular sector.

 

#4 – Kubernetes, the open source container orchestration platform, will become pivotal for strong cloud environments.

 

#5 – Many firms will build clouds on-site, accelerating the use of hybrid and private forms of the technology.

 

#6 – Development of software and platforms will become a greater focus with private clouds as options start to extend more aggressively beyond hosting.

 

#7 – With the market becoming hotter for management systems, users will be able to get these environments for cloud either piecemeal or as a complimentary feature.

 

#8 – Fully 10% of traffic will be rerouted to cloud service providers and to colocation, away from carrier backbones.

 

#9 – Virtual reality or immersive software will become fundamental to increasing speed and generating trackable, sustainable improvements in development of applications.

 

#10 – The precept of Zero Trust security will near ubiquity within cloud environments as it is accepted as a core best practice. (Zero Trust is a relatively new way – or at least a newly packaged way – to approach authentication and security within a network. This model is sometimes compared to what is supposedly the traditional approach in information technology: Trust But Verify. In the IDG publication CSO, security and compliance thought leader Robert C. Covington notes that the established model has really been Trust OR Verify. With Zero Trust, the idea is to segment the network into components such as database, web, wireless, and LAN, and then to treat each of them as untrusted despite the fact that they are internal systems.)
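
To illustrate that default-deny mindset, here is a tiny, purely illustrative policy check in Python. The segment names, identities, and policy table are hypothetical and not drawn from any specific product; the point is only that a request is refused unless it is explicitly authenticated and authorized for the target segment, even when it originates inside the network.

    # Toy Zero Trust-style check: deny by default, even for internal traffic.
    from dataclasses import dataclass

    POLICY = {
        # (identity, source_segment, target_segment): allowed?
        ("app-server", "web", "database"): True,
        ("laptop-42", "wireless", "database"): False,
    }

    @dataclass
    class Request:
        identity: str
        authenticated: bool
        source_segment: str
        target_segment: str

    def is_allowed(req: Request) -> bool:
        """Default-deny: an internal origin alone is never enough."""
        if not req.authenticated:
            return False
        return POLICY.get((req.identity, req.source_segment, req.target_segment), False)

    print(is_allowed(Request("app-server", True, "web", "database")))      # True
    print(is_allowed(Request("laptop-42", True, "wireless", "database")))  # False
    print(is_allowed(Request("app-server", False, "web", "database")))     # False (unauthenticated)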

 

Probably the most compelling forecast of all from Forrester is this summary comment: “In 2018, cloud computing will accelerate enterprise transformation everywhere [bold theirs] as it becomes a must-have business technology.”

 

Top 2018 cloud trends

 

The above assessment of the market and the predictions about how things are changing touched on a few trends, but now let's look at the top trends directly:

 

Colocation will become more broadly adopted.

 

IT chiefs want to phase out their internal data centers, notes Clint Boulton of CIO. Some of that business goes to cloud, but a significant amount is being entrusted to colocation.

 

The use of a colocation center or colo (often a hosting service) makes it possible for a CIO to have the company's hardware and other infrastructure supported and maintained within a managed environment (which may have the added benefit of easy integration with cloud apps and hosting).

 

Dave Bartoletti, an analyst at Forrester, explains that deploying a multi-cloud plan is simpler in this context, and that testing different clouds can be as easy as possible without the need to start thinking about migration upfront.

 

Artificial intelligence and machine learning take center stage.

 

Cloud environments are developing rapidly based on the insights of artificial intelligence (AI) and machine learning – and these types of systems are available to users as cloud-based services as well. Tools of this type will make it easier for businesses to analyze their data and make faster, smarter business decisions.

 

Hyperconvergence will be embraced for private cloud.

 

Some companies, particularly in industries for which compliance is key, are still unsure about moving confidential data and mission-critical services to an outside entity. That remains the case even while many organizations now cite security concerns as a reason to move to cloud. (Thought leaders have suggested that public cloud is generally more secure than on-premise data centers. That belief is reflected in a 2016 survey of 210 information technology leaders. 50.8% of respondents said that the primary motivator behind the migration to public cloud was that the security of the cloud was stronger than in their internal data center.)

 

Again, despite that shift toward greater trust in cloud security, the risks of having an outside party control the systems that store and access your data can make people feel hesitant – so they turn toward private cloud instead. Putting together a private cloud may be more common now, but it is not as simple as it may sound, says Boulton. It can be challenging and costly to integrate all the necessary components of a strong environment to achieve the ends that we now need in cloud (virtualization, automation, resource tracking, self-service accessibility, standardization, etc.).

 

To make it easier to launch a cloud, storage, processing, and networking resources are now prepackaged as hyperconverged infrastructure (HCI) offerings. When you want to implement private cloud, Forrester suggests HCI, especially in cases for which speed and immediate scalability are paramount. These appliances are making it possible for firms to provision private clouds quickly.

 

For further conversation about emergent and growing cloud trends, see the second half of this report, “Part 2: More 2018 Cloud Trends.”

 

The right cloud for your business

 

Are you considering your business’s approach to cloud for 2018? At Total Server Solutions, we believe a cloud-based solution should be scalable, reliable, fast, and easy to use. We do it right.

8 Mistakes People Make with WordPress


WordPress is not an umbrella technology used by the entire web – but it is pretty close. It underpins 29.0% of all sites assessed in the continually updated Web Technology Surveys market-share data.

 

As a tool, WordPress is a content management system (CMS) that simplifies website management. Although a CMS is fundamentally centered on content, functionality of the site is expanded through plugins, and design of the site is adjusted through the choice of site theme.

 

This platform is an extremely dominant brand within the CMS market, holding an incredible 59.8% of the market share. CBS Local, CNN, NBC, the New York Post, TechCrunch, TIME, TED, and many other sites use WordPress to deliver their message and updates to their audience.

 

The fact that so many people use WordPress also means that many mistakes are made by organizations using the technology to build their sites. Here are 8 of the most common errors companies make, presented here so that you can avoid them yourself:

 

#1 – Plugin overload

 

WordPress is often discussed in terms of its extraordinary flexibility – certainly at the level of its open source code, but also at the simple level of quickly enhancing your functionality with plugins. As of this writing, there are 53,033 plugins. Since there are so many of these optional add-on programs, it can be easy to get excited and install many that you do not need. Here are three basic issues with excessive plugins:

 

  • Each of them is a security risk that may not be updated as often as you’d like;
  • Each additional plugin generally makes your site less lean and fast; and
  • When you update to a new release of WordPress, plugins can cause your site to break (which is why you need to back up before updating) – so the fewer of them, the better.

 

#2 – Retention of unused plugins

 

Get rid of plugins that you are not using, and verify that the plugin files are removed from your server. Plugins that a site is not actively using are an unguarded gate: if you are not using them, you probably are not updating them, so security holes arise.

 

#3 – Not backing up the site

 

We all have the to-do list items that we “backburner.” Do not let site backup be one of those backburner items.

 

WordPress developer Nathan Ello frames backup as insurance for web presence; and that is essentially what it is. It is unpleasant and may feel even a bit paranoid to consider the worst-case scenarios – but it is due-diligence that is essential to protection. If you do not have a backup and have not paid for your hosting, the files for your site will be at risk of disappearing (although treatment of these situations is better through more customer-centric hosts).

 

Beyond what is provided for backup through your arrangement with your host, you can also use a plugin such as BackupBuddy. BackupBuddy is the DIY option, effectively; all support and management could also be handled externally through your host. High-quality backup solutions are readily available to meet this need if you want to leverage the expertise of a specialized third party.

 

#4 – Thinking a child theme is unprofessional

 

When you first hear the idea of a child theme, it may sound like a look that is designed in a manner that is so incomprehensible, it must be separately explained to each person who views it: “No – that’s a horse. It’s a submarine!”

 

To put child themes in context: WordPress sites use themes for their design. Themes are templates for the site – basically pieces of software that are added to the core WordPress code to make your site look and function in a particular way (still with full access to the open source code). The advantage of themes generally is that they allow you to make your site aesthetically pleasing without having to touch the code to install and start using them.

 

While having access to themes is great, you will almost inevitably reach a point at which you want to customize to really make the site your own. Typically a person will hire a third party to make adjustments to their theme.

 

Once you have modified a theme, you may feel all is well; but in the absence of a child theme, disaster is lurking. A child theme keeps your customizations in separate files that inherit from the parent theme, so updates to the parent do not overwrite them. Without one, when a new version of the theme comes out and appears as a Theme Update button within the admin portal, updating will “pave over” any tweaks that your paid developer made (one more reason to back up before you update). That means if any part of your site has been changed by a developer to better suit you specifically, that code may be gone forever. At the very least, it may be missing until you can get it replaced by the coder – during which time your site will look prehistoric in comparison to its status prior to the update.

 

#5 – Failure to update to the latest WP version

 

You must be concerned about backing up before you update, yes. However, you MUST update. Updates are better for the speed of your site. They will make it function better, fully supporting all the latest versions of plugins and themes. Most importantly, though, the newest version of the WordPress core code will have all the latest security patches. Set up auto-updates or get management assistance if needed; both of these options are far better than neglecting to update, especially since old versions are such a common vulnerability exploited by hackers.

 

#6 – Skipping important aspects of customization

 

Customization is often incomplete or sloppy. Here are elements that often do not get enough attention, according to Laura Buckler in Torque:

 

  • Favicon – On your web browser, you will see a very small icon right next to the title of the page. The favicon is a powerful way to improve your branding. Try your logo or a modified version of it.
  • Permalinks – Every WordPress site has permalinks to systematize the URLs of pages and posts. Changing from the default structure will help your search engine presence; this tactic will also help your reach on social platforms.
  • Administration – When you install WordPress, you may want to get it up and running immediately. However, the default “admin” username is a security liability, because attackers know to try it first. Beyond the risk of data compromise, you also do not want to be responding to comments with that nondescript, unbranded username.
  • Tagline – Your elevator pitch or slogan is your tagline. Out of the box, the tagline for every WP installation is “Just another WordPress site.” That description is not exactly enticing and says more about lack of customization than anything else.

 

#7 – Category overload

 

Just like plugin overload is an issue, you can also end up with far too many categories. You want the categories to simply organize your primary topics.

 

Hierarchize this aspect: you should have categories and subcategories. Allow the categories to define the scope of content at the point of origination; a piece's topic simply must fit within one of the categories or subcategories to be viable.

 

#8 – Overlooking infrastructure

 

Infrastructure, the “back end” of your site, is often overlooked. Consider this: speed is not only fundamental to engagement but has been a search ranking factor for almost a decade. The performance delivered by the hardware that actually responds to requests from users will be key in determining how strong the user experience is.

 

Beyond the equipment itself, you also may need help along the way. According to people who have used our high-performance services at Total Server Solutions, we are knowledgeable and quick to respond to support issues. See our testimonials.

How to Help Attorneys Embrace the Cloud


Helping attorneys use cloud-based solutions is about explaining why the technology is so valuable – that it has security, speed, access, and collaboration benefits for firms.

 

While just about every industry will end up using cloud computing environments, its growth has obviously been faster in some areas than others. For example, the cloud grew quickly with startups and SMBs, but it took longer for the technology to become popular in the enterprise. Cloud has increasingly become standard practice within healthcare. In fact, the Health and Human Services Department created an extremely thorough, wide-ranging, and fully cited document specifically dedicated to the topic (“Guidance on HIPAA & Cloud Computing”).

 

Another industry that has been more skeptical about moving to cloud is law. For obvious reasons, law firms have extreme concerns about protecting clients' highly sensitive data to the greatest possible degree – protection that can be supported by choosing a data center whose infrastructure is verified and certified to meet the parameters of SSAE 16 compliance (short for the Statement on Standards for Attestation Engagements No. 16, a set of principles developed by the American Institute of Certified Public Accountants).

 

Since it seems that the transition to cloud is increasingly occurring for law firms, it makes sense to figure out how best to help them smoothly make the migration with confidence. Here are a few things it is good to let law firms know when they are considering transitioning to the cloud:

 

#1 – Adoption rates suggest lawyers want the convenience.

 

Today, more lawyers are using cloud than ever before. The technology is appreciated within law as it is within all the other fields: it is incredibly convenient and allows you to access your systems from anywhere you can get a web connection.

 

Part of the reason firms are adopting cloud is that bar associations are helping to further the understanding, through ethics opinions, of how cloud can be used responsibly by attorneys. Attorneys are taking advantage of cloud platforms for their website hosting and email servers; for sharing files to allow collaboration with internal and external partners; for backing up HR details; to provide security against intrusion of their networks; to access and manipulate files remotely; and to take work off the slate of their IT teams (so they can focus on innovation rather than infrastructure and maintenance).

 

The numbers back up the idea that the distributed virtual network model is becoming central to law: an American Bar Association (ABA) study from 2016 reveals that at least one cloud service has now been adopted by 37.5% of attorneys. That same figure was at 31% in 2015 and 20% in 2014 – so clearly a transition to this form of computing continues to occur. Actually, other figures suggest that adoption by law firms is even more widespread: among firms that are in the Am Law 200 and answered The American Lawyer's 2015 Am Law-LTN Tech Survey, 51% of those 79 respondents said they had adopted cloud computing in some form.

 

#2 – Part of the reason the cloud has become so much more prevalent is that it is becoming recognized more widely as a secure choice.

 

There is more belief within the legal community that security and privacy are properly delivered within cloud atmospheres. Those law firms that feel security is now considered extremely solid within cloud computing are correct: thought leader David Linthicum calls people who are unsure about cloud technology the “folded arms gang.” In the piece, he convincingly suggests that security is better within cloud environments than it is within traditional on-premise data centers.

 

#3 – Cloud can be used to enhance your mobility.

 

The cloud allows you to deliver data seamlessly to smartphones and tablets as needed when you are away from your computer but want to maintain productivity throughout the day. Having your information in the cloud means that it is on a distributed virtual infrastructure (at a remote data center managed by a third party, if it’s a public or managed private cloud) rather than sitting behind a firewall. You are able to get information on-demand, just as your clients are. You are able to share or retrieve files between attorneys in a straightforward, simple, and efficient fashion. You are able to get data back and forth from one party to another without putting it at risk, both when you are sharing materials with clients and when you need to get it to litigation partners within the firm or other attorneys outside it – allowing you to do it right now rather than having to wait to get back to the office.

 

#4 – You don’t sink money into hardware that loses its value as you go.

 

If you spend the capital on your own data center for a traditional solution (whether dedicated or virtualized), you are investing in machines that will depreciate over time, gradually becoming obsolete. With the cloud, you do not need to buy the physical equipment – and that equipment is updated and maintained seamlessly over time at a cloud service provider. The cloud provider will manage the equipment. You will not have as big of a price tag upfront to start the system with cloud since that hardware is not needed, as noted in Law Technology Today. Basically, everything is handled behind the scenes, and you are unaware when updates are taking place.

 

#5 – Cloud lets onsite IT take a breath.

 

24/7 support is provided through a cloud provider, which can be extremely helpful to a firm that does not have a large IT department (which is true of most). Support that you will get from the CSP includes real-time oversight and checking of systems for active threats. Plus, they will manage the system to maximize the scalability of a plan so that resource distribution is meaningful and fits the needs of users. Service level agreements give attorneys a sense of what will be guaranteed from the provider in the areas of support and service.

 

Again, as indicated above, you can set up mission-critical cloud apps so that you are able to use your system anywhere you want. By getting faster access to your digital environment, you are better able to move quickly and achieve healthier work-life balance. With cloud systems trending toward greater mobile management, people will have an even simpler time working with their data and systems from any location. In turn, attorneys will better be able to work together to yield better results for all involved.

 

#6 – You get a platform that is better designed to leverage data analytics.

 

For your data to have value, you must analyze it. Law firms are increasingly adopting cloud so that they can better run analytics – with cloud tools that improve how they can use what they have at their fingertips for business intelligence, possibly improving their success rate at getting clients to work with them. Cloud systems may also reveal inefficiencies.

 

Launch legal cloud within an SSAE 16 compliant setting

 

Are you interested in setting up a cloud solution that meets the needs of your law firm? The SSAE 16 Type II Audit is your assurance that Total Server Solutions follows the best practices for data availability and security. See our SSAE 16 audit statement.

5 Top IoT Challenges


Underwriters Laboratories (UL), the certification and compliance company founded in 1894, has built a literal testing ground for the internet of things (IoT) era. The IoT – well, the consumer IoT at least – is about the interconnection of computing devices within everyday household objects. Since that's the case, it would make sense that a strong testing ground for it would be a house.

 

Enter the UL “Living Lab.” The lab is a two-story residence that provides a space in which devices can interact within a real-world setting, so that researchers can verify these environments operate quickly and coherently, without security compromises or interoperability snags. At the Living Lab, people within the house use various IoT devices to verify that they function in accordance with one another and the external world.

 

A few of the factors that are of greatest concern to the UL researchers within this environment are typical ones that impact network and device performance:

 

  • Floor plan – how ceilings and walls might interfere with connection
  • Noise – the influence of ambient noise from residents or other “things”
  • Acoustic elements – the impact of furniture, drapes, rugs, and carpets
  • Other Wi-Fi – additional radiating devices, including nearby Wi-Fi networks, that interrupt your own system’s communications
  • IoT overload – bandwidth consumed by many different devices.

 

Essentially, the Living Lab project allows UL to uncover issues in a sort of “fishbowl” setting. From a more general perspective, the challenges of IoT technology can be understood through a framework provided by Ahmed Banafa of the University of California, Berkeley. The lenses through which he views IoT technology are security; connection; sustainability and compatibility; standards; and derivation of insights for intelligent action.

 

Security

 

Security is a central concern of the internet of things. With all the new nodes come new ways for hackers to find their ways into the network – especially since devices are often not built with strong security in mind (because the IoT is growing so rapidly now, with a focus placed more substantially on function than on data protection).

 

How critical is security to the IoT? Look no further than this November 8, 2017, headline by Charlie Osborne of ZDNet: “IoT devices are an enterprise security time bomb.” The evidence comes from Forrester Consulting. The firm's poll of 603 line-of-business and IT executives at large companies from six nations (including the US and UK) found that 82% of respondents said they would not necessarily be able to pass an audit because they could not identify all of the operational technology (OT) and internet of things devices on their networks.

 

Partially due to this lack of knowledge about the technology, the stresses of the internet of things are real as well, according to the survey. 54% of respondents reported that the IoT is a cause of stress: they are unsure it has the protection it needs.

 

Curiously enough, the ZDNet piece reveals one of the problems holding back security: uncertainty about the IoT itself. Companies typically were not investing large amounts in internet of things projects, in part because executives were still rather reserved on the topic. With tight budgets, 2 out of 5 staff members polled said that their organizations were using traditional tools to protect IoT systems.

 

“This is a glaring issue for today’s firms, which need crystal-clear visibility into networks where BYOD and IoT are common,” said Osborne.

 

Connection

 

Connectivity is another basic concern of the IoT that will push us beyond the server/client communication paradigm that we have used previously for node authorization and connection.

 

Server/client is a model that is well-suited to smaller numbers of devices. With the advent of the IoT, though, networks could require the integration of billions of devices, leading to bottlenecks in server/client scenarios. The new systems will be sophisticated cloud settings capable of sending and receiving massive amounts of information, scaling as needed.

 

“The future of IoT will very much depend on decentralizing IoT networks,” noted Banafa.

 

One way that decentralization is achieved is by transitioning certain tasks to the edge of the network, as with fog computing architectures that use hubs for mission-critical processing, with data collection and analytics through the cloud.
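
As a rough sketch of that fog pattern (the device readings, safety threshold, and batch size below are all made up for illustration), a hub can make the time-critical decision locally and hand only batched data off to the cloud for heavier analytics:

    # Illustrative fog-style hub: time-critical decisions happen locally at the hub,
    # while the bulk of the readings is batched and shipped to the cloud for analytics.
    from collections import deque

    PRESSURE_LIMIT = 8.5  # hypothetical safety threshold

    class FogHub:
        def __init__(self, batch_size=100):
            self.batch = deque()
            self.batch_size = batch_size

        def handle_reading(self, reading: float):
            # Mission-critical path: react immediately, no round trip to the cloud.
            if reading > PRESSURE_LIMIT:
                self.trigger_local_shutoff(reading)
            # Non-critical path: queue the reading for bulk upload and analysis.
            self.batch.append(reading)
            if len(self.batch) >= self.batch_size:
                self.flush_to_cloud()

        def trigger_local_shutoff(self, reading):
            print(f"ALERT: {reading} exceeds limit; actuating local shutoff")

        def flush_to_cloud(self):
            print(f"Uploading batch of {len(self.batch)} readings for cloud analytics")
            self.batch.clear()

    hub = FogHub(batch_size=5)
    for value in [7.9, 8.1, 9.2, 8.0, 7.7, 8.2]:
        hub.handle_reading(value)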

 

Sustainability / Compatibility

 

Currently, there are many different companies using various protocols that are trying to develop the standards for the internet of things. This free-market competition can boost innovation and options; however, additional software and hardware may be necessary in order to interconnect devices.

 

Disparities between operating systems, firmware, and machine to machine (M2M) protocols could all cause challenges in the IoT.

 

The reason that these two elements, sustainability and compatibility, are discussed under the same heading is that compatibility is directly linked to the ability of the overall ecosystem to survive long-term. Some technologies will inevitably become obsolete in the coming years, which could render their devices worthless. No one wants their refrigerator to become unusable a year or two after purchase because the manufacturer is no longer open for business.

 

Standards

 

Standards for data aggregation, networking, and communication will all help to determine processes for management, transmission, and storage of sensor data. Aggregation is critical because it improves the availability of data – in frequency of access, scale, and scope – for analysis. One concern that will make it harder to arrive at agreed standards within this field is the issue of unstructured data. Information within relational databases, called structured data, can be accessed via SQL. However, the unstructured contents within NoSQL databases are not accessed through one standard technique. Another issue is that companies may not have the skillsets they need on staff to leverage and maintain cutting-edge big data systems.
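
A small, self-contained illustration of that gap, using Python's built-in sqlite3 module for the relational side and a plain list of dictionaries as a stand-in for a document store (it is not a real NoSQL client):

    # Structured data: SQL is a widely shared standard for querying it.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (device TEXT, temp REAL)")
    conn.execute("INSERT INTO readings VALUES ('sensor-1', 21.4), ('sensor-2', 35.0)")
    hot = conn.execute("SELECT device FROM readings WHERE temp > 30").fetchall()
    print(hot)  # [('sensor-2',)]

    # Unstructured/document-style data: each store has its own query interface,
    # so there is no single standard way to ask the same question.
    documents = [
        {"device": "sensor-1", "payload": {"temp": 21.4, "note": "ok"}},
        {"device": "sensor-2", "payload": {"temp": 35.0, "fault_codes": [17, 42]}},
    ]
    hot_docs = [d["device"] for d in documents if d["payload"].get("temp", 0) > 30]
    print(hot_docs)  # ['sensor-2']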

 

One of the key reasons that standardization will be so helpful to the IoT is simply that it will make everything easier – as noted by Daniel Newman in Forbes. Currently, you cannot simply plug in a device. Instead, apps and drivers have to be installed. The technology should be simpler. Through APIs and open source technologies, IoT manufacturers will be able to integrate their devices with the worldwide ecosystem that already exists. “If these items use the same ‘language,'” said Newman, “they will be able to talk in ways they—and we—understand.”

 

Derivation of Insights for Intelligent Action

 

Finally, the IoT must have takeaways. Cognitive technologies are used in this setting to improve analysis and spark more powerful findings. Key trends related to this field include:

 

  • Lower cost of data storage: The volume of data that you have available will make it easier to get the results you want from an artificial intelligence (AI) system, especially since storage costs are lower than in the past.
  • More open source and crowdsourced analytics options: Algorithms are developing rapidly as cloud-based crowdsourcing has become prevalent.
  • Real-time analytics: You are able to get access to data that impacts your business “right now,” with real-time analysis through complex event processing (CEP) and other capabilities – see the short sketch after this list.
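
As a toy illustration of CEP-style real-time analysis, the sketch below watches a stream of readings through a sliding window and emits a composite event the moment a pattern appears. The threshold, window size, and stream values are invented for the example.

    # Toy complex event processing: raise an "overheat" event as soon as three
    # consecutive readings exceed the threshold, without waiting for batch analysis.
    from collections import deque

    THRESHOLD = 30.0
    WINDOW = 3

    def detect_events(stream):
        window = deque(maxlen=WINDOW)
        for i, value in enumerate(stream):
            window.append(value)
            if len(window) == WINDOW and all(v > THRESHOLD for v in window):
                yield i, list(window)  # emit the composite event immediately

    stream = [22.0, 31.0, 33.5, 34.1, 29.0, 35.2]
    for index, pattern in detect_events(stream):
        print(f"Event at reading {index}: {pattern}")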

 

High-Performance Infrastructure for IoT

 

The above challenges are certainly not holding back the forward momentum of the internet of things. As it expands, strong and reliable cloud hosting will be fundamental to the success of individual projects.

 

Are you in need of a powerful cloud to back your IoT system? At Total Server Solutions, we engineered our cloud solution with speed in mind, and SSD lets us provide you with the high levels of performance that you demand. Get the only cloud with guaranteed IOPS.

6 Top E-Commerce Trends for 2018


It may seem to be old news to say that e-commerce is growing at a wild pace, but it continues to be the case heading into 2018. We can better understand just how fast e-commerce is growing by comparing it to other segments of the economy. A report from Kiplinger reveals that construction materials sales rose 8.0% in 2017 (partially due to hurricane damage) and restaurant revenue increased 3.3%. Excluding gasoline, retail sales were generally up 3.8% overall. Keep in mind that those are the bright points of the economy in terms of growth (not yet having touched on e-commerce).

 

In this same environment of nonexistent to relatively slow growth (with the exception of one segment – construction materials – boosted, of all things, by natural disaster recovery), e-commerce sales are growing 15%, the second consecutive year of a 15% rise. Online sales have been consistently expanding for seven years now, noted Kiplinger; the end result is that e-commerce will represent 9% of all retail revenue and 13% of all goods sold by the time 2017 comes to a close. Now, to really understand what's going on, let's compare to brick-and-mortar: in 2016, in-store purchases increased 1.4%, and in 2017 they are expected to rise 1.8%. In other words, e-commerce is growing 8.33 times as fast as brick-and-mortar.

 

Well, those are the numbers; and although huge growth is expected, that degree of rapid expansion is unchanged from last year. How are things changing and evolving, then? Here are top trends that will increasingly influence e-commerce efforts in 2018:

 

Trend #1 – Omni-platform & omni-device

 

You want people to have high-quality experiences regardless of the device – and that objective has already been met by many companies on-site through a focus on responsive design. The next step for a more seamless and consistent experience is integration across all devices and platforms – going beyond a presence on channels to having a fully integrated approach.

 

One key technique in deploying the general “omni” approach is ads that follow users via tracking cookies, noted Kayla Matthews in Direct Marketing News (DMN). For example, if you put up an ad for football tailgating supplies on Google, the user could see ads for those types of supplies in their Facebook feed the next day.

 

Trend #2 – Micro-moments become a more central concern

 

The notion of micro-moments is key in terms of how businesses approach mobile device use, according to Smart Insights. Micro-moments can be a way of considering every decision you make online. This concept refers to “highly critical and evaluative touchpoints where customers expect brands to cater to their needs with reliable information, regardless of the time and location,” said the marketing intelligence company.

 

Consider this fact for a second, and you will get a sense of exactly why this term is important. Incredibly, 24 out of every 25 people go to their phone immediately when they need to answer a question. If you want to answer the question the person has, think in terms of micro-moments and direct your online presence accordingly.

 

Trend #3 – Internet-based education / commoditization of video

 

Would you like to get up at 7am, head over to an auditorium, and watch an expert give a lecture? Maybe, but it does not sound like too much fun. However, you may be willing to watch one from home. There is much to be learned from sites such as Coursera and Udemy, which focus on self-improvement topics. “[T]hese are opportune moments to capitalize on this market,” according to Rotem Gal in Digital Commerce 360 (a.k.a. Internet Retailer) – which remains true in 2018 just as it did in 2017.

 

Think about what you can do to get your content – and through it, information and resources – to your potential customers. Gal specifically pointed to the functionality of Kajabi – a learning platform that allows you to have a more sophisticated marketing approach, with bells and whistles that allow you to better promote the specific instructor (landing pages, e-mail newsletter features, etc.).

 

Trend #4 – Increase the sophistication of your personalization approach

 

Personalizing a site has become pivotal in many different segments within e-commerce, ranging from finance to travel to retail.

 

In recent years, the price has gone down on these solutions, according to an additional piece by Smart Insights CEO Dave Chaffey. The options really are fairly diverse as this type of technology has matured; you can personalize the experience at the level of the content/commerce management system; as a tool integrated into your analytics software; or with a personalization app that you attach to your analytics platform or CMS.

 

Chaffey advocates using an experience personalization pyramid to think about strategizing in this direction – with personalization, segmentation, and optimization filling the top, middle, and bottom layers respectively.

 

Starting from the bottom, here is how the pyramid works:

 

  • Optimization – You could use split-testing (also called multivariate testing, A/B testing, and structured experiments); a minimal example of deterministic split-test assignment follows this list. One way to move forward with this element is Google Optimize (but there are plenty of alternatives).
  • Segmentation – Figure out how to divide up your customers into targeted user groups, so you can specialize your content to meet each one. You will want to have different hands-on rules, and be careful that you don’t overdo segmenting. “[R]eturns for this approach eventually diminish after the maximum sustainable number of audience segments has been reached,” said Chaffey.
  • 1-to-1 personalization – In order for each customer to get an experience that is truly customized to them, use artificial intelligence (AI) that is capable of 1-to-1. In order to create a real 1-to-1 buying journey, it is necessary to tackle two issues that segmentation and optimization cannot: solving delays and scaling (as can be achieved by AI).
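
As referenced in the Optimization bullet, here is a minimal sketch of deterministic split-test assignment. The experiment name and user IDs are hypothetical; hashing the user ID keeps each visitor in the same variant across visits.

    # Deterministic A/B bucket assignment: the same user always gets the same variant.
    import hashlib

    def assign_variant(user_id: str, experiment: str, variants=("control", "treatment")) -> str:
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    for user in ["user-1001", "user-1002", "user-1003"]:
        print(user, assign_variant(user, "checkout-button-color"))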

 

Trend #5 – Unleashing of the robots

 

We’ve discussed AI; now let’s get to the robots – on the rise through 2017 and 2018. These robots have arrived, and they would like to have a little chat. It only makes sense that chatbots would start to catch hold: you can automate them, control them, and at least distance yourself from human error.

 

The thing is that customer service questions should be answered quickly. More than three-quarters of people, 77%, told Forrester Research (according to social customer service SaaS Conversocial) that the most important thing a company can do to treat them well is not waste their time.

 

While robots are certainly imperfect, they do help you get to each customer faster — especially because in the context of e-commerce sales, you want to get your response time as close to “none” as you can.

 

Trend #6 – Leveraging a customer engagement protocol to introduce stronger content marketing

 

Content marketing is typically seen by marketers as one of the most important methods for introducing a product or service to prospects. Businesses are becoming savvier about using content as a resource. The key is to come up with content that is intended for different audiences and to figure out a customer engagement plan that covers various media. For the content itself, try personas and content mapping, advised Chaffey.

 

Conclusion

 

Are you wanting to improve e-commerce results? To deliver the speed that is so critical to online sales, you need strong infrastructure and access to broad resources. At Total Server Solutions, all of our high-performance hosting plans include UNLIMITED BANDWIDTH. See our e-commerce plans.

What is DCIM


Data centers are dotting the landscape, and they keep popping up all over the place – a trend that will continue. Worldwide, the market for data center construction was at $14.59 billion in 2014 – when it was forecast to rise at a compound annual growth rate (CAGR) of 9.3% to $22.73 billion by 2019.

 

Despite the incredible expansion in the number of data centers, there is actually good news when it comes to the amount of energy that is used by these facilities. In 2016, a landmark study was released – the first thorough analysis of American data centers in about 10 years. As reported in Data Center Knowledge, the study found that the demand for capacity skyrocketed between 2011 and 2016; but throughout that period, energy consumption hardly increased at all.

 

In 2014, the power fueling American data centers measured about the same as is used by 6.4 million residences annually: 70 billion kilowatt-hours. The study found that electrical use at data centers rose just 4% from 2010 to 2014. That is nowhere near the rise between 2005 and 2010, when total power consumption grew by a shocking 24% – and the increase was even more astronomical in the first half of the 2000s, at 90%.

 

The amount of energy consumed by data centers would have grown much more aggressively if a focus on deploying efficiency improvements had not been so fundamental to data center management in the last few years. In fact, the US Department of Energy study (in collaboration with Carnegie Mellon, Northwestern, and Stanford) looked directly at this issue by reframing 2014's consumption in terms of 2010's efficiency. If efficiency had stayed at the 2010 level, data centers would have consumed about 40 billion more kWh in 2014 than they actually did.

 

For the period from 2010 to 2020, improvements in the efficiency of power consumption will be responsible for cutting power consumption by 620 billion kWh, noted the study's authors. The report projected a 4% rise in data center consumption from 2016 through 2020 – expecting consumption to continue at the same growth rate. If that forecast is correct, total consumption would hit 73 billion kWh by that point.
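
A quick sanity check of those figures: starting from the roughly 70 billion kWh cited for 2014 and applying about 4% total growth lands very close to the 73 billion kWh projection.

    # Rough check: ~70 billion kWh grown by ~4% is about 73 billion kWh.
    baseline_kwh = 70e9       # ~70 billion kWh (2014 figure cited above)
    projected_growth = 0.04   # ~4% total growth over the forecast period

    projected_kwh = baseline_kwh * (1 + projected_growth)
    print(f"{projected_kwh / 1e9:.1f} billion kWh")  # 72.8 – roughly the 73 billion projected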

 

It is remarkable that data centers can grow so fast while hardly needing to draw any additional power (proportionally). One way these facilities have become more efficient is through the use of data center infrastructure management (DCIM) tools. What is DCIM? How is it being integrated with other steps to bolster data center efficiency?

 

The basics on DCIM

 

It may sound like data center infrastructure management is referring to how you place controls and protocols on the machines – but it is broader than that. As the nexus of information technology and facilities-related concerns, DCIM encompasses such areas as utility consumption, space planning, and hardware consolidation.

 

DCIM began as a component of building information modeling (BIM) environments. Facilities managers implement BIM tools to generate schematic diagrams for any building. A DCIM program allows you to do what is possible with a BIM within the context of the data center. This software enables real-time analysis, collation, and storage of data related to your power consumption. You can print out diagrams as needed, making it easier to conduct maintenance or deploy new physical machines.

 

DCIM and 5 other ways to improve data center efficiency

 

Despite the fact that extraordinary strides have been made in recent years related to efficiency, power is still a huge part of the bill. In fact, according to an August 2015 study published in Energy Procedia, approximately 40 cents out of every dollar spent by data centers goes toward energy costs.

 

Plus, the 4% rise is just one analysis. Figures from Gartner suggest that electrical costs are actually increasing at about 10% annually.

 

Since energy consumption has become such an important priority for data centers, standards have developed to improve it systematically. One of the most critical standardized elements of efficiency efforts is a metric called power usage effectiveness (PUE). Interestingly, Gartner research director Henrique Cecci noted that PUE is helpful as a broad figure on the status of energy efficiency within the elements of the data center; however, it does not reveal the more granular concern of how efficient the IT hardware is.
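
For reference, PUE is simply total facility energy divided by the energy delivered to the IT equipment, so a value of 1.0 would mean every watt reaches the IT load – which is also why it says nothing about how efficiently that IT hardware itself runs. A one-line sketch, with sample figures that are made up:

    # Power usage effectiveness: total facility energy / IT equipment energy.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # 1.5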

 

Cecci noted that if you want to use power as efficiently as possible, you will make the most significant impact by optimizing the electrical consumption of your IT hardware. Here are six key steps he suggested to make your data center more energy-efficient:

 

Step 1 – Collect information.

 

Carefully monitor how much electricity you consume. Adjust as you go.

 

Step 2 – Make sure your IT systems are efficiently organized.

 

What will ultimately consume the electricity is the IT systems. For that reason, you want to reduce the payload power that is consumed by the machines. Actually, servers gobble up 60% of the payload power. So that they will not use as much power, you can:

 

  • Get rid of any unhelpful workloads.
  • Consolidate virtual environments.
  • Virtualize as many of your processes as possible.
  • Clear out machines that are not “justifying their existence.”
  • Get newer servers (since newer models are built with stronger efficiency technologies).

 

Step 3 – Make sure you are getting the most out of your space.

 

Data centers that were constructed in advance of the server virtualization era may have too much space, essentially, in terms of the hardware that is needed in the current climate. You can potentially improve your efficiency, then, with a new data center.

If you are designing a data center, an efficient approach is modular. That way you are sectioning the facility into these various modules, sort of like rooms of a house that can be revised and improved as units. Christy Pettey of Gartner called this approach to data center design “more flexible and organic.”

 

Step 4 – Improve the way you cool.

 

Cooling is a huge concern on its own, so it is important to use standardized methods such as:

 

  • Economizers – By implementing air economizers, you can garner a better PUE. Throughout most of North America, you should be able to get 40 to 90% of your cooling from the outside air if you use these devices.
  • Isolation – Contain the servers that are producing heat. Discard that heat from the data center, or (better yet) use it to heat other areas of the facility.
  • Fine-tune your A/C. There are a couple trusted ways to allow an air conditioning system to be as efficient as possible. One is to shut it down occasionally, switching to a secondary cooling system such as an air optimizer. The other option is to fluctuate the air conditioning system’s speed as you go to lower total energy consumption.

 

Step 5 – Replace any inefficient equipment.

 

Your PUE can also be negatively impacted by power delivery systems that have been deployed for some time – such as transformers, power distribution units (PDUs), and uninterruptible power supplies (UPSs). Assess these systems regularly, and refresh as needed.

 

Step 6 – Implement DCIM software.

 

Launching a DCIM program will give you a huge amount of insight to become even more efficient. Pettey actually makes a comment that relates to that notion of DCIM being a nexus of IT and facilities-related concerns (above). “DCIM software provides the necessary link between the operational needs of the physical IT equipment and the physical facilities (building and environment controls),” she said.

 

Your partner for high-performance infrastructure

 

Do you need a data center that operates as efficiently as possible? A critical aspect of efficiency is the extent to which components are integrated. At Total Server Solutions, each of our services is engineered to work with our other products and services to bring to life a highly polished, well-built hosting platform. We build solutions.

The IoT Challenge


During 2017, there will be 8.4 billion objects connected to the internet. That's a 31% rise over 2016 numbers, and the figure is still headed north – with Gartner predicting there will be 20.4 billion IoT devices by 2020. Spending on connected devices and services will reach $2 trillion this year – with 63% of the IoT made up of consumer endpoints. This data suggests how fast the IoT is growing, which points to how disruptive the technology is and the extent to which it will broadly impact our lives.

 

With incredible growth comes incredible opportunity, so the Internet of Things should certainly be viewed in terms of its possibility. However, it is also a sticky subject that is giving enormous headaches to IT professionals from the standpoints of connection and security (although its security has been improved somewhat by advances in the cloud servers that typically form its basis). Before we get into that, let’s better understand the landscape by looking at the two main branches of IoT.

 

Consumer vs. industrial IoT – what’s the distinction?

 

While the consumer internet of things is the primary point of focus in the media, the industrial internet of things (IIoT) is also a massive game-changer, offering a way to seamlessly track and continually improve processes. Let's look at 5 primary differences between these two branches of IoT:

 

  • IIoT devices need to be much more durable, depending on the conditions where they will be deployed. Think about the difference between a Fitbit and an IoT sensor that must be submerged within oil or water in order to measure its flow rate; the latter device has to meet the specifications of the IP68 standard, while a Fitbit does not.

 

  • Industrial devices must be built with scalability in mind. While home automation is perhaps the most complex consumer project, an industrial project can involve thousands of midpoints and endpoints across hundreds of miles.

 

  • “Things” within the IIoT often take measurements in areas that are largely inaccessible. For instance, a sensor may sit underneath the ground (as with gas and oil pipes), at a high point (as with a water reservoir), out in the ocean (as with offshore drilling), or in the middle of the desert (as with a weather station).

 

  • The industrial internet of things faces the same general security threats as the consumer internet does, but the stakes are higher: a consumer hack (such as someone infiltrating a smart home) is local, while an industrial scenario can be much more devastating, since many of these installations are sensors used to facilitate processes at water treatment facilities and power plants.

 

  • The level of granularity and customization with industrial applications is higher. While a smart refrigerator might have relatively complex capabilities, it is fairly standard for IIoT devices to need to be adapted in order to meet the specific needs of the manufacturer that is ordering them.

 

Two big potential hurdles of the IoT

 

What are some of the biggest things holding back the internet of things? Connectivity and security. Let’s look at these two potentially problematic elements.

 

Connectivity

 

The Internet of Things can only reveal its true power when there are a sufficient number of devices connected – and that itself is an issue. One of the primary concerns with cloud computing as it started to accelerate was a lack of established standards, and the same is currently true of IoT as it continues to develop. IoT manufacturers and services have many different specialties that reach out both horizontally (the variety of different capabilities) and vertically (throughout various sectors).

 

There are a vast number of companies that are operating within the Internet of Things, and the huge tech companies are running numerous types of systems. Part of the problem is that because IoT growth is so fast-paced, independent development is prioritized over the interoperability that will create a truly stable environment.

 

The lack of interoperability within the field can be understood in terms of raw competition. In the absence of a set of established standards, each individual firm is left to create its own – which itself represents a huge opportunity. Each firm likes its own version and wants it to become the accepted standard. Proprietary systems get the focus because everyone wants to be “the OS of IoT.”

 

The good news is that this problem is being addressed. One example is the Living Lab of certification, validation, testing, and compliance firm Underwriters Laboratories (UL). The lab is simply a two-story home that offers a real-world scenario in which interoperability can be studied.

 

Since we live in the era before established IoT standards, we have many different options from which to choose when we create systems. You get a sense of what a jungle the situation is by looking at the range of networking options. Examples of technologies, each with its own technical standards, are 6LowPAN, AllJoyn, Bluetooth, Bluetooth LE, cellular, CoAP, HomeKit, JSON-LD, MQTT, Neul, NFC, Sigfox, Weave, Wi-Fi, and Z-Wave. A device might operate correctly through some of these technologies but not others; that is an interoperability issue.

 

What makes the interoperability issue immediately complex, even at the level of the network, is that the different communication technologies operate at different stack layers. Some are radio standards, while others are data protocols or engage at the transport layer. HomeKit is practically its own operating system, and some of the protocols interact at more than one layer.
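As a small illustration of how one of these data protocols is used in practice, here is a minimal sketch that publishes a sensor reading over MQTT using the open-source Eclipse Paho client for Python. The broker hostname, topic, and flow reading are hypothetical placeholders, and a real deployment would add TLS and authentication:

```python
# Minimal MQTT publish sketch using the Eclipse Paho helper (pip install paho-mqtt).
# The broker address, topic, and sensor reading are hypothetical placeholders.
import json
import paho.mqtt.publish as publish

reading = {"sensor_id": "flow-001", "flow_rate_lpm": 42.7}

publish.single(
    topic="plant/pipeline/flow",        # hierarchical topic chosen for this example
    payload=json.dumps(reading),
    qos=1,                              # at-least-once delivery
    hostname="broker.example.com",      # placeholder broker
    port=1883,                          # MQTT's default (unencrypted) TCP port
)
```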

 

What’s good about this diversity? From one internet of things project to another, you can use a very different set of technologies, and companies can get creative in their implementations. For example, Anticimex, which offers pest control services in Sweden, shoots messages from its smart traps through a carrier network to an SMS system and, from there, to a control center. By setting up the relay system in this manner, Anticimex is able to isolate the vast majority of problems at the trap, since there is not a direct connection from the trap into its system.

 

Security

 

Another primary challenge facing the IoT (which is, in many ways, related to interoperability) is security. “So many new nodes being added to networks and the internet will provide malicious actors with innumerable attack vectors and possibilities to carry out their evil deeds,” explained IoT thought-leader Ahmed Banafa, “especially since a considerable number of them suffer from security holes.”

 

The danger to IoT projects is twofold, and both components relate to the endpoints – largely because it is challenging to secure small, simply engineered devices (as Banafa’s comments suggest).

 

One of the problems that can arise is that a device that is breached can be used as a window into your system by a cybercriminal. Any endpoint is an attack vector waiting to happen.

 

The other primary problem is that an exploited device does not have to be used against you immediately. A hacked device can be recruited into a massive botnet of exploited IoT devices. The most large-scale problem of this sort is Mirai and its offshoots: through Mirai, huge numbers of security cameras, routers, and smart thermostats were used in 2016 to take down some of the largest sites in the world, along with the site of security researcher Brian Krebs.

 

Infrastructure for your IoT project

 

As you might have guessed when you first saw the title, the IoT is not just a promise or just a challenge, but both. Like any huge area of opportunity, it is about overcoming the challenges to see the results that are only available in relatively undeveloped territory.

 

Are you considering an internet of things implementation? At Total Server Solutions, get the only cloud with guaranteed IOPS. Your IoT cloud starts here.

Choosing a CMS – The Issue of Control

Posted by & filed under List Posts.

Content management systems (CMSs) are certainly popular. It would not be accurate to say that they are Internet-wide, but they are prevalent enough to be considered a core technology for web development – whether you are simply using the systems and their associated software “as is” or are customizing them.

 

Before we talk about how to choose a CMS, let’s get some context with a brief history of this technology. We will close by touching briefly on infrastructure, since that is also an important piece.

 

  • A brief history of the CMS
  • How to choose a CMS – 3 basic steps
  • High-performance cloud hosting to drive your site

 

A brief history of the CMS

 

It helps to put a technology into perspective by looking at its history. For the content management system, the best place to start is the 1990s, as indicated by a short history put together by Emory University web development instructor Ivey Brent Laminack.

 

In the mid-90s, developers were still having difficulty getting proper display for their HTML pages. E-commerce sites were just about the only dynamic pages. Coders who were working to help build e-commerce sites were using ColdFusion or Perl. There was not yet a real established basis for online transactions or for the integrated management of content.

 

The web continued to progress of course (as it has been known to do). By the late-90s, there were languages such as PHP that were a better fit for the Internet. Industry professionals were beginning to realize that it was actually a wise idea to allow owners of websites to update and manage their own content. Because of this increased understanding that website owners needed that type of access (in other words, that this type of tool would have value), coders started writing content management systems; and, in turn, the CMS became a prevalent technology. The CMS made it possible for users to bring images from their own desktop computers online; create informational pieces and narratives; and boost the general engagement of web pages.

 

Things have changed quite a bit since the late-90s though. Initially, coders were coming up with their own software. That was basically the emergence of the custom CMS. The diversity at that time was nice, but people like ecosystems that can be standardized to an extent – and the business world wanted to monetize these types of systems. Hence, firms were created to build, sell, and support content management systems.

 

Some web-based CMSs were actually derived from document management systems. These systems were a way to keep a handle on all the files within a desktop, such as word-processing documents, presentations, and spreadsheets. Those systems were starting to get more widely used at about the turn of the millennium. Document management system software was particularly useful to large newspapers and magazines; in those organizations, full adoption was typically a six-figure project. Soon after the turn of the millennium, open source CMS choices started to become available and proliferate. Mambo and Drupal were two of the chief ones in the early years.

 

“For the first few years, they were only marginally useful,” noted Laminack, “but by about 2004, they were starting to be ready for prime-time.”

 

As emergence years for each type of CMS, Laminack lists 1997 for the custom CMS, 2000 for the proprietary CMS, and 2004 for the open source CMS.

 

How to choose a CMS – 3 basic steps

 

There are plenty of articles online arguing for one CMS over another. (As we know, WordPress has plenty of adherents, which is why it is the king of the market.) This is not a popularity contest, though. Let’s look at advice for choosing the best CMS:

 

1.) Consider why you want a new CMS.

 

As you start the process, think about your own goals. Consider the problems you want to address with the technology. Is there something about your current system that you particularly don’t like? Think about the negatives, too, advised business intelligence software-as-a-service firm Siteimprove. Are there elements of your current environment that you really want to leave behind? Try making a Requirements Matrix (aka a Features or Evaluation Matrix) to get a better sense of how well the different CMSs measure up against one another; a minimal scoring sketch follows below.
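As a rough idea of how such a matrix can be scored, here is a short sketch in Python; the criteria, weights, and 1–5 ratings are hypothetical and should be replaced with your own evaluation data:

```python
# Minimal requirements-matrix sketch; the criteria, weights (importance),
# and 1-5 ratings below are hypothetical placeholders.
weights = {"usability": 5, "control": 5, "mobile": 3, "seo_tools": 4, "scalability": 3}

candidates = {
    "CMS A": {"usability": 4, "control": 3, "mobile": 5, "seo_tools": 4, "scalability": 3},
    "CMS B": {"usability": 5, "control": 4, "mobile": 3, "seo_tools": 3, "scalability": 4},
}

for name, ratings in candidates.items():
    total = sum(weights[criterion] * ratings[criterion] for criterion in weights)
    print(f"{name}: weighted score = {total}")
```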

 

2.) Prioritize, above all else, usability and control.

 

The two most important things to look for in a CMS, as a general rule, are user-friendliness and the extent to which you have control, according to Chicago-based website design firm Intechnic. These two elements are intertwined. You want it to be easy to make updates to content; publish at the times you want; update important parts of your site such as the terms of service; and create new pages. You need the control to be able to easily complete these types of tasks; they are central to the role of a content management system.

 

It is common for a CMS not to support many elements that you want to be able to integrate into your site. “This is unacceptable,” said Intechnic. “A good CMS needs to adapt to your business’ standards, processes, and not the other way around.”

 

3.) Look for other key attributes of a strong environment.

 

CMS Critic discussed the topic of CMS selection in terms of the characteristics that a user should want in one – and should verify are present when exploring options:

 

  • Usability: Note that this feature, discussed above as one of the pair that should be the underpinning of a CMS choice (per Intechnic), is listed first by CMS Critic.
  • Mobile-friendliness: You need a CMS that offers strong mobile capabilities since access from phones and tablets is now so much of the whole pie.
  • Permissions and workflow: There is an arc to content, running from its production to its editing, management, and auditing. A good CMS program will give you the ability to create workflows and otherwise simplify content management.
  • Templates: You want a system that has the ability to easily create templates. These templates should make it simple to copy content and to reuse the same structural format.
  • Speed and capacity to grow: The system you choose should have great performance (both strong reliability and high speed), along with scalability related to that performance so that you will not hit a wall as you grow.
  • Great search engine tools and on-site searchability: One of the most critical aspects of a CMS is its ability to get your message to potential customers through the search engines. Make sure the CMS offers tools, such as plugins, that can boost your SEO. You also want on-site search so that visitors can easily find content and navigate in an open-ended way.
  • Deployment agility: You want it to be possible to serve the CMS either on your own server or in an external data center (including cloud).
  • Broad & robust support and service: You want to know that you can get support and service, whether through the CMS provider or through the broader tech community.

 

High-performance cloud hosting to drive your site

 

Are you deciding on a CMS for your business? Beyond the process of figuring out the CMS that makes sense, you also need to figure out its hosting.

 

At Total Server Solutions, our cloud hosting boasts the highest levels of performance in the industry. See our High Performance Cloud Platform.

What Is Data Infrastructure

Posted by & filed under List Posts.

Where is your information? Would you describe it as being in your infrastructure or your data infrastructure? Let’s look at what data infrastructure is, why it is important, and the specific characteristic of reliability, availability, and serviceability (RAS). Then we will close by reviewing some common issues and advice.

 

When you use the Internet, whether for business or personal reasons, you are fundamentally reliant upon data infrastructures. A data infrastructure is a backend computing concept (it is backend computing, essentially), so it is understandable that it is often called by names that don’t fit it quite as well – such as simply infrastructure, or the data center.

 

Infrastructure is a term used for the set of tools or systems that support various professional or personal activities. An obvious example at the public level, in terms of infrastructure maintained by the government, is roads and bridges. These are the basic structures through which people can store, contain, or move themselves, products, or anything else – allowing them to get things where they otherwise couldn’t.

 

Infrastructure is, in a sense, support to allow for the possibilities of various functions considered essential to modern society. In the case of IT services specifically, you could think of all technological components that underlie that tool as infrastructure, noted Greg Schulz in Network World.

 

The basic issue is that infrastructure is an umbrella term for these functional, supportive building blocks. There are numerous forms of infrastructure that must be incorporated within an information technology (IT) ecosystem, and they are best understood as layers. The top layer is the business infrastructure (the key environments used to run your business). Beneath that layer is the information infrastructure – the software and platforms that allow the business systems to be maintained and developed. Finally, beneath the information infrastructure is the data infrastructure, along with the actual data centers or technological habitats. These physical environments can also be supported by outside infrastructure, especially networking channels and electricity.

 

Now that we understand the context in which data infrastructure (ranging from cloud hosting to traditional onsite facilities) exists, let’s explore the idea in its own right.

 

If you think of a transportation infrastructure as the components that support transportation (the roads and bridges) and a business infrastructure as the tools and pieces that support business interactions directly, you can think of data infrastructure as the equipment or parts that are there to support data: safeguarding it, protecting it from destruction, processing it, storing it, transferring it, and sending it – along with the programs for the provision of computing services. Specific aspects of data infrastructure are physical server machines, programs, managed services, cloud services, storage, networking, staff, and policies; it also extends from cloud to containers, from legacy physical systems to software-defined virtual models.

 

Purpose: to protect and to serve (the data)

 

The whole purpose of your data infrastructure is to be there for your data as described above – protecting it and converting it into information. Protection of the data is a complex task that includes such concerns as archiving; backup and restore; business continuity and business resiliency (BC/BR); disaster recovery (DR); encryption and privacy; physical and logical security; and reliability, availability, and serviceability (RAS).

 

It will often draw some amount of attention when a widely used data infrastructure or application environment goes down. Recent outages include the Australian Tax Office, Gitlab, and Amazon Web Services.

 

The troubling thing about the downtime incidents that have been seen in these high-profile scenarios, as well as ones that were not covered as much outside of security circles, is that they are completely avoidable with the right safeguards at the level of the software and the data.

 

A large volume of disasters and other unplanned downtime could be reduced or eliminated altogether, said Schulz in a separate Network World piece. “[I]f you know something can fail,” he said, “you should be able to take steps to prevent, isolate and contain problems.”

 

Be aware that there is always a possibility of error, and any technological solution can experience a fault. People worry about the machines – it is easy to point fingers at hardware. However, the greatest areas of vulnerability are the situations in which humans determine and control the setup of computers, applications, plans, and procedures.

 

What is the worst-case scenario? If data loss occurs, it can be complete or partial. Features of data protection are both vast and granular, having to incorporate concerns related to diversely distributed locations, data center facilities, platforms, clusters, cabinets, shelves, and single pieces of hardware or software.

 

What is RAS?

 

Why must your data infrastructure have RAS? Reliability, availability, and serviceability is a trio of concerns used when architecting, building, producing, buying, or implementing an IT component. The concept was originally “deployed” by IBM when the company wanted to come up with standards for its mainframes; at that point, it was only a characteristic pertinent to hardware. Now RAS is also used to describe applications, networks, and other systems.

 

  • Reliability – the capacity of a component of hardware or software to meet the specifications its manufacturer or provider describes.
  • Availability – the amount of time that a computing part or service works compared to the entire time that the user expects it to work, expressed as a ratio (a quick worked example follows this list).
  • Serviceability – the extent to which a piece of a computing environment is accessible and modifiable so that fixes and maintenance can occur.
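Here is that quick worked example of the availability ratio, expressed as a short sketch; the annual downtime figure is hypothetical:

```python
# Quick availability calculation; the downtime figure is a hypothetical assumption.
hours_in_year = 365 * 24                  # 8,760 hours of expected uptime
downtime_hours = 8.76                     # assumed unplanned downtime for the year

availability = (hours_in_year - downtime_hours) / hours_in_year
print(f"{availability:.3%}")              # 99.900% -- i.e., "three nines"
```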

 

Top issue #1 – letting software take the lead

 

Both data infrastructures and threat landscapes are increasingly software-defined. Today is the era of the rise of the software-defined data infrastructure (SDDI) and software-defined data center (SDDC). In this climate, it is good to start addressing the field of software-defined data protection – which can be used to allow for better availability of your data infrastructure, along with its data and programs.

 

In today’s climate, the data infrastructure, its software, and its data are all at risk from software-defined as well as traditional threats. The “classic” legacy problems that might arise are still a massive risk; they include natural disaster, human error, glitches in programs, and problems with application setups. Software-defined issues run the spectrum from spyware, ransomware, and phishing to distributed denial of service (DDoS) and viruses.

 

Top issue #2 – getting ahead of the curve

 

We should be hesitant to shrug it off when there is a major outage. We should ask hard questions, especially the big one: “Did the provider lower operational costs at the expense of resiliency?”

 

4 tips for improved data infrastructure

 

Here is how to make your data infrastructure stronger, in 4 snippets of advice:

 

  1. Consider the issue of resiliency not just in terms of cost but in terms of benefits – evaluating each of your systems in this manner. The core benefit is insurance against outage.
  2. Create duplicate copies of all data, metadata, keys, certificates, applications, and other elements (whether the main system is run by a third party or in-house). You also want backup DNS so that you cannot effectively be booted off the internet.
  3. Data loss prevention starts at home, with the decision either to invest in RAS or to lower your costs. You should also vet your providers to make sure that they will deliver. Rather than thinking of data protection as a business expense, reframe it as an asset – and present it that way to leadership.
  4. Make sure that data that is protected can also be quickly and accurately restored (a minimal verification sketch follows this list).
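For that fourth tip, one simple way to test restores is to compare checksums of the original files against a restored copy. The sketch below assumes hypothetical directory paths and whole-file hashing; adapt it to your own backup layout:

```python
# Minimal restore-verification sketch: compare checksums of originals vs. restored copies.
# The directory paths are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

source_dir = Path("/data/production")
restored_dir = Path("/data/restore-test")

for original in source_dir.rglob("*"):
    if original.is_file():
        restored = restored_dir / original.relative_to(source_dir)
        if not restored.exists() or sha256_of(original) != sha256_of(restored):
            print(f"MISMATCH: {original}")   # flag anything missing or altered in the restore
```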

 

Summary/Conclusion

 

Simply by thinking of your programs and data on a case-by-case basis, and by implementing strategies (such as deduplication, compression, and optimized backups) to minimize your data footprint, you will spend less money while creating better redundancy. To meet this need, your data infrastructure should be as resilient as possible – but also backed by outstanding support so you can quickly adapt.
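As a rough illustration of the footprint-reduction idea, the sketch below estimates how much a set of files would shrink if duplicate content were stored only once and then compressed. The directory path is a hypothetical placeholder, and real deduplication systems typically work at the block level rather than on whole files:

```python
# Minimal footprint sketch: estimate savings from dedupe (by content hash) plus compression.
# The directory path is a hypothetical placeholder.
import hashlib
import zlib
from pathlib import Path

raw_bytes = deduped_bytes = compressed_bytes = 0
seen_hashes = set()

for f in Path("/data/backups").rglob("*"):
    if f.is_file():
        data = f.read_bytes()
        raw_bytes += len(data)
        digest = hashlib.sha256(data).hexdigest()
        if digest not in seen_hashes:              # count unique content only once
            seen_hashes.add(digest)
            deduped_bytes += len(data)
            compressed_bytes += len(zlib.compress(data))

print(f"raw: {raw_bytes}  after dedupe: {deduped_bytes}  after compression: {compressed_bytes}")
```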

 

Are you rethinking your data infrastructure? At Total Server Solutions, we believe strongly in our support – but, as they say, don’t just take our word for it. From drcreations in Web Hosting Talk: “Tickets are generally responded to in 5-10 mins (normally closer to 5 mins) around the clock any day. It’s true 24/7/365 support.”

 

We’re different. Here’s why.