What is DCIM


Data centers dot the landscape, and more keep appearing – a trend that will continue. Worldwide, the data center construction market was valued at $14.59 billion in 2014, when it was forecast to grow at a compound annual growth rate (CAGR) of 9.3% and reach $22.73 billion by 2019.

 

Despite the rapid expansion in the number of data centers, there is good news about how much energy these facilities use. In 2016, a landmark study was released – the first thorough analysis of American data centers in about 10 years. As reported in Data Center Knowledge, the study found that demand for capacity skyrocketed between 2011 and 2016, but throughout that period energy consumption hardly increased at all.

 

In 2014, American data centers drew about 70 billion kilowatt-hours, roughly the same amount of power used by 6.4 million homes in a year. That represents a rise of just 4% since 2010 – nowhere near the 24% growth in total power consumption between 2005 and 2010, and the increase toward the beginning of that decade was even steeper, at 90%.

 

The amount of energy consumed by data centers would have grown much more aggressively if deploying efficiency improvements had not become so fundamental to data center management in recent years. In fact, the US Department of Energy study (conducted in collaboration with Carnegie Mellon, Northwestern, and Stanford) looked directly at this issue by reframing 2014’s consumption in terms of 2010’s efficiency: had efficiency stayed at the 2010 level, data centers would have consumed 40 billion more kWh in 2014 than they did in 2010.

 

For the period from 2010 to 2020, improvements in efficiency will be responsible for cutting power consumption by 620 billion kWh, noted the study’s authors. The report projected a 4% rise in data center consumption from 2016 through 2020, assuming demand continues to grow at the same rate. If that forecast is correct, total consumption would hit 73 billion kWh by 2020.

 

It is remarkable that we have become so efficient that data centers can grow this fast while hardly needing to draw any additional power (proportionally). One way these facilities have become more efficient is through the use of data center infrastructure management (DCIM) tools. What is DCIM? How is it being integrated with other steps to bolster data center efficiency?

 

The basics on DCIM

 

It may sound like data center infrastructure management refers only to the controls and protocols you place on the machines, but it is broader than that. As the nexus of information technology and facilities-related concerns, DCIM encompasses such areas as utility consumption, space planning, and hardware consolidation.

 

DCIM began as a component of building information modeling (BIM) environments. Facilities managers implement BIM tools to generate schematic diagrams for any building; a DCIM program lets you do the same within the context of the data center. This software enables real-time analysis, collation, and storage of data related to your power consumption. You can print out diagrams as needed, making it easier to conduct maintenance or deploy new physical machines.

 

DCIM and 5 other ways to improve data center efficiency

 

Although extraordinary strides have been made in efficiency in recent years, power is still a huge part of the bill. In fact, according to an August 2015 study published in Energy Procedia, approximately 40 cents of every dollar spent by data centers goes toward energy costs.

 

Plus, the 4% rise is just one analysis. Figures from Gartner suggest that electrical costs are actually increasing at about 10% annually.

 

Since energy consumption has become such an important priority for data centers, standards have developed to improve it systematically. One of the most critical standardized elements of efficiency efforts is a metric called power usage effectiveness (PUE). Interestingly, Gartner research director Henrique Cecci noted that PUE is helpful as a broad indicator of energy efficiency across the elements of the data center; however, it does not reveal the more granular concern of how efficient the IT hardware itself is.
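
For reference, PUE is the ratio of total facility energy to the energy delivered to the IT equipment, so a value of 1.0 would mean every watt entering the building reaches the hardware. Here is a quick illustrative calculation in Python (the figures below are made up, not taken from any of the studies cited in this article):

    # PUE = total facility energy / IT equipment energy (1.0 is the theoretical ideal).
    total_facility_kwh = 1_500_000   # hypothetical annual draw for the whole building
    it_equipment_kwh = 1_000_000     # hypothetical annual draw for servers, storage, and network gear

    pue = total_facility_kwh / it_equipment_kwh
    print(pue)  # 1.5 -- every kWh of IT load needs another 0.5 kWh of overhead (cooling, power delivery, lighting)

A facility-level number like this says nothing about whether the servers themselves are doing useful work per watt, which is exactly the gap Cecci points out.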

 

Cecci noted that if you want to use power as efficiently as possible, you will make the most significant impact by optimizing the electrical consumption of your IT hardware. Here are six key steps he suggested to make your data center more energy-efficient:

 

Step 1 – Collect information.

 

Carefully monitor how much electricity you consume. Adjust as you go.

 

Step 2 – Make sure your IT systems are efficiently organized.

 

The IT systems are what will ultimately consume the electricity, so you want to reduce the payload power drawn by the machines. Servers alone gobble up 60% of the payload power. To bring their draw down, you can:

 

  • Get rid of any unhelpful workloads.
  • Consolidate virtual environments.
  • Virtualize as many of your processes as possible.
  • Clear out machines that are not “justifying their existence.”
  • Get newer servers (since newer models are built with stronger efficiency technologies).

 

Step 3 – Make sure you are getting the most out of your space.

 

Data centers constructed before the server virtualization era may simply have more space than today’s hardware footprint requires. You can therefore potentially improve your efficiency with a new, right-sized data center.

If you are designing a data center, an efficient approach is a modular one. That way you section the facility into modules, somewhat like rooms of a house, that can be revised and improved as units. Christy Pettey of Gartner called this approach to data center design “more flexible and organic.”

 

Step 4 – Improve the way you cool.

 

Cooling is a huge concern on its own, so it is important to use standardized methods such as:

 

  • Economizers – By implementing air economizers, you can garner a better PUE. Throughout most of North America, you should be able to get 40 to 90% of your cooling from the outside air if you use these devices.
  • Isolation – Contain the servers that are producing heat. Discard that heat from the data center, or (better yet) use it to heat other areas of the facility.
  • Fine-tune your A/C. There are a couple of trusted ways to make an air conditioning system as efficient as possible. One is to shut it down occasionally, switching to a secondary cooling system such as an air economizer. The other is to vary the air conditioning system’s speed in response to conditions to lower total energy consumption.

 

Step 5 – Replace any inefficient equipment.

 

Your PUE can also be negatively impacted by power delivery systems that have been deployed for some time – such as transformers, power distribution units (PDUs), and uninterruptible power supplies (UPSs). Assess these systems regularly, and refresh as needed.

 

Step 6 – Implement DCIM software.

 

Launching a DCIM program will give you a huge amount of insight to become even more efficient. Pettey actually makes a comment that relates to that notion of DCIM being a nexus of IT and facilities-related concerns (above). “DCIM software provides the necessary link between the operational needs of the physical IT equipment and the physical facilities (building and environment controls),” she said.

 

Your partner for high-performance infrastructure

 

Do you need a data center that operates as efficiently as possible? A critical aspect of efficiency is the extent to which components are integrated. At Total Server Solutions, each of our services is engineered to work with our other products and services to bring to life a highly polished, well-built hosting platform. We build solutions.

The IoT Challenge


During 2017, there will be 8.4 billion objects connected to the internet. That’s a 31% rise over 2016 numbers, and the figure is still headed north – Gartner predicts there will be 20.4 billion IoT devices by 2020. Spending on connected devices and services will reach $2 trillion this year, with consumer endpoints making up 63% of the IoT. This data shows how fast the IoT is growing, which points to how disruptive the technology is and how broadly it will impact our lives.

 

With incredible growth comes incredible opportunity, so the Internet of Things should certainly be viewed in terms of its possibility. However, it is also a sticky subject that is giving enormous headaches to IT professionals from the standpoints of connection and security (although its security has been improved somewhat by advances in the cloud servers that typically form its basis). Before we get into that, let’s better understand the landscape by looking at the two main branches of IoT.

 

Consumer vs. industrial IoT – what’s the distinction?

 

While the consumer internet of things is the primary point of focus in the media, the industrial internet of things (IIoT) is also a massive game-changer, offering a way to seamlessly track and continually improve processes. Let’s look at 5 primary differences between these two branches of IoT:

 

  • IIoT devices need to be much more durable, depending on the conditions where they will be deployed. Think about the difference between a Fitbit and an IoT sensor that must be submerged in oil or water in order to measure its flow rate; the latter device has to meet the IP68 standard, while a Fitbit does not.

 

  • Industrial devices must be built with scalability in mind. While home automation is perhaps the most complex consumer project, an industrial project can involve thousands of midpoints and endpoints across hundreds of miles.

 

  • “Things” within the IIoT are often monitoring systems in places that are largely inaccessible. For instance, a device may be underneath the ground (as with gas and oil pipes), at a high point (as with a water reservoir), out in the ocean (as with offshore drilling), or in the middle of the desert (as with a weather station).

 

  • The industrial internet of things faces the same general security threats as the consumer internet does. The stakes are higher with the IIoT, though, because a consumer hack (such as someone infiltrating a smart home) is local. An industrial scenario, on the other hand, can be much more devastating, since many of these installations are sensors that facilitate processes at water treatment and power plants.

 

  • The level of granularity and customization with industrial applications is higher. While a smart refrigerator might have relatively complex capabilities, it is fairly standard for IIoT devices to need to be adapted in order to meet the specific needs of the manufacturer that is ordering them.

 

Two big potential hurdles of the IoT

 

What are some of the biggest things holding back the internet of things? Connection and security. Let’s look at these two major potentially problematic elements.

 

Connection

 

The Internet of Things can only reveal its true power when a sufficient number of devices are connected – and that itself is an issue. One of the primary concerns with cloud computing as it started to accelerate was a lack of established standards, and the same is currently true of IoT as it continues to develop. IoT manufacturers and services have many different specialties that reach out both horizontally (the variety of different capabilities) and vertically (throughout various sectors).

 

There are a vast number of companies that are operating within the Internet of Things, and the huge tech companies are running numerous types of systems. Part of the problem is that because IoT growth is so fast-paced, independent development is prioritized over the interoperability that will create a truly stable environment.

 

The lack of interoperability within the field can be understood in terms of raw competition. In the absence of established standards, each individual firm is left to create its own – which itself represents a huge opportunity. Each firm likes its own version and wants it to become the accepted standard. Proprietary systems get the focus because everyone wants to be “the OS of IoT.”

 

The good news is that this problem is being addressed. One example is the Living Lab of certification, validation, testing, and compliance firm Underwriters Laboratories (UL). The lab is simply a two-story home that offers a real-world scenario in which interoperability can be studied.

 

Since we live in the era before established IoT standards, we have many different options from which to choose when we create systems. You get a sense of what a jungle the situation is by looking at the range of networking options. Examples of technologies, each with its own technical standards, include 6LowPAN, Alljoyn, Bluetooth, Bluetooth LE, cellular, CoAP, Homekit, JSON-LD, MQTT, Neul, NFC, Sigfox, Weave, Wi-Fi, and Z-wave. A device might operate correctly through some of these networking technologies but not others; that is an interoperability issue.

 

What makes the interoperability issue immediately complex even at the level of the network is that the different communication protocols operate within different stack layers. Some of the networking methods are radio communication, while others are data protocols or engage at the transport layer. Homekit is practically its own operating system. Some of the protocols interact at more than one layer.
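
To make the layering point a bit more concrete, here is a minimal sketch of what the “data protocol” level looks like in practice: a device publishing a sensor reading over MQTT, one of the technologies named above. The paho-mqtt Python client library (1.x API), the broker hostname, and the topic are illustrative assumptions, not details from the article.

    # Publish one reading from a hypothetical flow sensor to an MQTT broker.
    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client(client_id="flow-sensor-001")
    client.connect("broker.example.com", 1883, keepalive=60)   # placeholder broker address
    client.publish("plant/pipeline/flow", json.dumps({"liters_per_min": 12.4}), qos=1)
    client.disconnect()

A device speaking only CoAP, or only a vendor-specific radio protocol, could not talk to this broker directly – exactly the kind of gap that turns interoperability into real integration work.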

 

What’s good about this? From one internet of things project to another, you can use a very different set of technologies, and companies can get creative in their implementations. For example, Anticimex, which offers pest control services in Sweden, sends messages from its smart traps through a carrier network to an SMS system and from there to a control center. By setting up the relay in this manner, Anticimex is able to isolate the vast majority of problems at the trap, since there is no direct connection from the trap into its system.

 

Security

 

Another primary challenge facing the IoT (which is, in many ways, related to interoperability) is security. “So many new nodes being added to networks and the internet will provide malicious actors with innumerable attack vectors and possibilities to carry out their evil deeds,” explained IoT thought-leader Ahmed Banafa, “especially since a considerable number of them suffer from security holes.”

 

There is a twofold danger to IoT projects, both components of which are related to the endpoints – largely because it is challenging to secure small, simply engineered devices (as suggested by Banafa’s comments).

 

One of the problems that can arise is that a device that is breached can be used as a window into your system by a cybercriminal. Any endpoint is an attack vector waiting to happen.

 

The other primary problem is that an exploited device does not have to be used against you immediately. A hacked device can be recruited into a massive botnet of exploited IoT devices. The most large-scale problem of this sort is Mirai and its offshoots. Through Mirai, huge numbers of security cameras, routers, and smart thermostats were used in 2016 to take down some of the largest sites in the world, along with the site of security researcher Brian Krebs.

 

Infrastructure for your IoT project

 

As you might have guessed when you first saw the title, the IoT is not just a promise or just a challenge, but both. Like any huge area of opportunity, it is about overcoming the challenges to see the results that are only available in relatively undeveloped territory.

 

Are you considering an internet of things implementation? At Total Server Solutions, get the only cloud with guaranteed IOPS. Your IoT cloud starts here.

choosing a CMS -- the issue of control


Content management systems (CMSs) are certainly popular. It would not be accurate to say that they are Internet-wide, but they are prevalent enough to be considered a core technology for web development – whether you are simply using the systems and their associated software “as is” or are customizing them.

 

Before we talk about ideas for making the choice of a CMS, let’s look at context by talking a little about the history of this technology. Finally, we will close by touching briefly on infrastructure since that is also an important piece.

 

  • A brief history of the CMS
  • How to choose a CMS – 3 basic steps
  • High-performance cloud hosting to drive your site

 

A brief history of the CMS

 

It helps to put a technology into perspective by looking at its history. For the content management system, the best place to start is the 1990s, as indicated by a short history put together by Emory University web development instructor Ivey Brent Laminack.

 

In the mid-90s, developers were still having difficulty getting proper display for their HTML pages. E-commerce sites were just about the only dynamic pages. Coders who were working to help build e-commerce sites were using ColdFusion or Perl. There was not yet a real established basis for online transactions or for the integrated management of content.

 

The web continued to progress, of course (as it has been known to do). By the late ’90s, there were languages such as PHP that were a better fit for the Internet, and industry professionals were beginning to realize that it was a wise idea to let owners of websites update and manage their own content. Because this type of tool clearly had value, coders started writing content management systems, and the CMS became a prevalent technology. The CMS made it possible for users to bring images from their own desktop computers online, create informational pieces and narratives, and boost the general engagement of web pages.

 

Things have changed quite a bit since the late-90s though. Initially, coders were coming up with their own software. That was basically the emergence of the custom CMS. The diversity at that time was nice, but people like ecosystems that can be standardized to an extent – and the business world wanted to monetize these types of systems. Hence, firms were created to build, sell, and support content management systems.

 

Some web-based CMSs were actually derived from document management systems. These systems were a way to keep a handle on all the files within a desktop, such as word-processing documents, presentations, and spreadsheets. Those systems were starting to get more widely used at about the turn of the millennium. Document management system software was particularly useful to large newspapers and magazines; in those organizations, full adoption was typically a six-figure project. Soon after the turn of the millennium, open source CMS choices started to become available and proliferate. Mambo and Drupal were two of the chief ones in the early years.

 

“For the first few years, they were only marginally useful,” noted Laminack, “but by about 2004, they were starting to be ready for prime-time.”

 

As emergence years for each type of CMS, Laminack lists 1997 for the custom CMS, 2000 for the proprietary CMS, and 2004 for the open source CMS.

 

How to choose a CMS – 3 basic steps

 

There are plenty of articles online arguing for one CMS over another. (As we know, WordPress has plenty of adherents, which is why it is the king of the market.) This is not a popularity contest, though. Let’s look at advice for choosing the best CMS:

 

1.) Consider why you want a new CMS.

 

As you start the process, think about your own goals. Consider the problems you want to address with the technology. Is there something about your current system that you particularly don’t like? Think about the negatives, too, advised business intelligence software-as-a-service firm Siteimprove. Are there elements of your current environment that you really want to leave behind? Try making a Requirements Matrix (also known as a Features or Evaluation Matrix) to get a better sense of how well the different CMSs measure up against one another.

 

2.) Prioritize, above all else, usability and control.

 

The two most important things to look for in a CMS, as a general rule, are user-friendliness and the extent to which you have control, according to Chicago-based website design firm Intechnic. These two elements are intertwined. You want it to be easy to make updates to content; publish at the times you want; update important parts of your site, such as the terms of service; and create new pages. You need the control to complete these types of tasks easily; they are central to the role of a content management system.

 

It is common for a CMS not to support many elements that you want to be able to integrate into your site. “This is unacceptable,” said Intechnic. “A good CMS needs to adapt to your business’ standards, processes, and not the other way around.”

 

3.) Look for other key attributes of a strong environment.

 

CMS Critic discussed the topic of CMS selection in terms of the characteristics a user should want in one – and should confirm are present when exploring options:

 

  • Usability: Note that this feature, discussed above as one of the pair that should be the underpinning of a CMS choice (per Intechnic), is listed first by CMS Critic.
  • Mobile-friendliness: You need a CMS that offers strong mobile capabilities since access from phones and tablets is now so much of the whole pie.
  • Permissions and workflow: There is an arc to content, running from its production to its editing, management, and auditing. A good CMS program will give you the ability to create workflows and otherwise simplify content management.
  • Templates: You want a system that has the ability to easily create templates. These templates should make it simple to copy content and to reuse the same structural format.
  • Speed and capacity to grow: The system you choose should have great performance (both strong reliability and high speed), along with scalability related to that performance so that you will not hit a wall as you grow.
  • Great search engine tools and on-site searchability: One of the most critical aspects of a CMS is its ability to get your message to potential customers through the search engines. Make sure the CMS offers tools, such as plugins, that can boost your SEO. You also want on-site search so that visitors can explore your content with open-ended navigation.
  • Deployment agility: You want it to be possible to serve the CMS either on your own server or in an external data center (including cloud).
  • Broad & robust support and service: You want to know that you can get support and service, whether through the CMS provider or through the broader tech community.

 

High-performance cloud hosting to drive your site

 

Are you deciding on a CMS for your business? Beyond the process of figuring out the CMS that makes sense, you also need to figure out its hosting.

 

At Total Server Solutions, our cloud hosting boasts the highest levels of performance in the industry. See our High Performance Cloud Platform.

What Is Data Infrastructure


Where is your information? Would you describe it as being in your infrastructure or your data infrastructure? Let’s look at what data infrastructure is, why it is important, and the specific characteristic of reliability, availability, and serviceability (RAS). Then we will close by reviewing some common issues and advice.

 

When you use the Internet, whether for business or personal reasons, you are fundamentally reliant upon data infrastructures. A data infrastructure is a backend computing concept (it is backend computing, essentially), so it is understandable that it is often called by names that don’t fit it quite as well – such as simply infrastructure, or the data center.

 

Infrastructure is a term used for the set of tools or systems that support various professional or personal activities. An obvious example at the public level, in terms of infrastructure maintained by the government, is roads and bridges. These basic structures allow people to store, contain, or move themselves, products, or anything else – getting things where they otherwise couldn’t go.

 

Infrastructure is, in a sense, support to allow for the possibilities of various functions considered essential to modern society. In the case of IT services specifically, you could think of all technological components that underlie that tool as infrastructure, noted Greg Schulz in Network World.

 

The basic issue is that infrastructure is an umbrella term for these functional, supportive building blocks. There are numerous forms of infrastructure that must be incorporated within an information technology (IT) ecosystem, and they are best understood as layers. The top layer is the business infrastructure (the key environments used to run your business). Beneath that layer is the information infrastructure: the software and platforms that allow the business systems to be maintained and developed. Finally, beneath the information infrastructure is the data infrastructure, along with the actual data centers or technological habitats. These physical environments can also be supported by outside infrastructure, especially networking channels and electricity.

 

With that context for where data infrastructure (ranging from cloud hosting to traditional onsite facilities) sits, let’s explore the idea in its own right.

 

If you think of a transportation infrastructure as the components that support transportation (the roads and bridges) and a business infrastructure as the tools and pieces that support business interactions directly, you can think of data infrastructure as the equipment or parts that are there to support data: safeguarding it, protecting it from destruction, processing it, storing it, transferring it, and sending it – along with the programs for the provision of computing services. Specific aspects of data infrastructure are physical server machines, programs, managed services, cloud services, storage, networking, staff, and policies; it also extends from cloud to containers, from legacy physical systems to software-defined virtual models.

 

Purpose: to protect and to serve (the data)

 

The whole purpose of your data infrastructure is to be there for your data as described above – protecting it and converting it into information. Protection of the data is a complex task that includes such concerns as archiving; backup and restore; business continuity and business resiliency (BC/BR); disaster recovery (DR); encryption and privacy; physical and logical security; and reliability, availability, and serviceability (RAS).

 

It will often draw attention when a widely used data infrastructure or application environment goes down. Recent outages have affected the Australian Tax Office, GitLab, and Amazon Web Services.

 

The troubling thing about the downtime incidents that have been seen in these high-profile scenarios, as well as ones that were not covered as much outside of security circles, is that they are completely avoidable with the right safeguards at the level of the software and the data.

 

A large volume of disasters and other unplanned downtime could be reduced or eliminated altogether, said Schulz in a separate Network World piece. “[I]f you know something can fail,” he said, “you should be able to take steps to prevent, isolate and contain problems.”

 

Be aware that there is always a possibility of error, and any technological solution can experience a fault. People worry about the machines – it is easy to point fingers! However, the greatest areas of vulnerability are the situations in which humans determine and control the setup of computers, applications, plans, and procedures.

 

What is the worst-case scenario? If data loss occurs, it can be complete or partial. Features of data protection are both vast and granular, having to incorporate concerns related to diversely distributed locations, data center facilities, platforms, clusters, cabinets, shelves, and single pieces of hardware or software.

 

What is RAS?

 

Why must your data infrastructure have RAS? Reliability, availability, and serviceability are a trio of concerns used when architecting, building, producing, buying, or implementing an IT component. The concept was originally “deployed” by IBM when the company wanted to come up with standards for its mainframes; at that point, it was only a characteristic of hardware. Now RAS is also used to describe applications, networks, and other systems.

 

  • Reliability – the capacity of a component of hardware or software to meet the specifications its manufacturer or provider describes.
  • Availability – the amount of time that a computing part or service works compared to the entire time that the user expects it to work, expressed as a ratio (see the quick example after this list).
  • Serviceability – the extent to which a piece of a computing environment is accessible and modifiable so that fixes and maintenance can occur.
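
As a rough illustration of the availability ratio, here is a minimal calculation. The uptime and downtime figures are made up, and dividing uptime by total expected time is the common convention rather than something spelled out above:

    # Availability = time the service works / total time it is expected to work.
    def availability(uptime_hours, downtime_hours):
        total = uptime_hours + downtime_hours
        return uptime_hours / total

    # Example: 43.8 hours of downtime over a year works out to 99.5% availability.
    year_hours = 365 * 24                                    # 8,760 hours
    print(round(availability(year_hours - 43.8, 43.8), 4))   # 0.995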

 

Top issue #1: letting software take the lead

 

Both data infrastructures and threat landscapes are increasingly software-defined. Today is the era of the rise of the software-defined data infrastructure (SDDI) and software-defined data center (SDDC). In this climate, it is good to start addressing the field of software-defined data protection – which can be used to allow for better availability of your data infrastructure, along with its data and programs.

 

In today’s climate, the data infrastructure, its software, and its data are all at risk from software-defined as well as traditional threats. The “classic” legacy problems that might arise are still a massive risk; they include natural disaster, human error, glitches in programs, and problems with application setups. Software-defined issues run the spectrum from spyware, ransomware, and phishing to distributed denial of service (DDoS) and viruses.

 

Top issue #2 – getting ahead of the curve

 

We should be hesitant to shrug it off when there is a major outage. We should ask hard questions, especially the big one: “Did the provider lower operational costs at the expense of resiliency?”

 

4 tips for improved data infrastructure

 

Here is how to make your data infrastructure stronger, in 4 snippets of advice:

 

  1. You will want to consider the issue of resiliency not just in terms of cost but in terms of benefits – evaluating each of your systems in this manner. The core benefit is insurance against outage.
  2. Create duplicate copies of all data, metadata, keys, certificates, applications, and other elements (whether the main system is run third-party or in-house). You also want backup DNS so you cannot effectively be booted from the internet.
  3. Data loss prevention starts at home, with a decision to invest in RAS or to lower your costs. You should also vet your providers to make sure that they will deliver. Rather than thinking of data protection as a business expense, reframe it as an asset – and present it that way to leadership.
  4. Make sure that data that is protected can also be quickly and accurately restored.

 

Summary/Conclusion

 

Simply by thinking of your programs and data on a case-by-case basis, as well as implementing strategies (such as deduping, compression, and optimized backups) to minimize your data footprint, you will spend less money while creating better redundancy. To meet this need, your data infrastructure should be as resilient as possible – but also be attached to incredible support so you can quickly adapt.

 

Are you rethinking your data infrastructure? At Total Server Solutions, we believe strongly in our support – but, as they say, don’t just take our word for it. From drcreations in Web Hosting Talk: “Tickets are generally responded to in 5-10 mins (normally closer to 5 mins) around the clock any day. It’s true 24/7/365 support.”

 

We’re different. Here’s why.

Cloud integration


<<< Go to Part 1

6.) Consistency With New Releases – Martin Welker at Zenkit

Zenkit is a project management app. It allows you to perform tasks such as studying your data analytics, spotting connections in your information that you may not have previously noticed, creating filters and aggregations, and designing formulas. Martin Welker, the CEO and founder of the company, sold his first application when he was only 15 and has been developing business productivity software for more than 20 years. His customer base, at 5 million, is just slightly larger than the entire population of South Carolina. That Welker pays so much attention to emergent data strategies should not be surprising, since it is aligned with his appreciation for cloud.

Welker explains that there are many reasons it makes sense to use cloud when you need app hosting. These are, in his opinion, the most compelling things that the technology has to offer:

  • It is reliable. You will often get much better reliability with a cloud backend than with one you architect and build in an on-premise datacenter. You can offer a Service Level Agreement that is backed, by extension, by the service levels guaranteed by a credible infrastructure provider. Your systems will be live 24/7, Welker says – and throughout, your uptime will be maintained at a high level (assuming you have a solid SLA and are working with a host that gets praise in third-party reviews). You will also get high reliability from the IT staff running your servers, because they will be monitoring around the clock, there to answer and resolve any late-night support questions or issues that arise.
  • It boosts your adoption. Cloud gives you a really low barrier to entry. People are used to web-based software. It’s possible to get people to register with a single click. Since everything is Internet-based, anyone who is online is a potential user or customer.
  • It is built for the peaks and valleys. You can scale – not just grow, but fluctuate in response to demand without having to change your servers. That means you don’t need hardware padding, so you can operate on a leaner model.
  • It is becoming the new normal. Cloud is becoming ubiquitous, which also means that it is the only place you will be able to find certain cutting-edge or sophisticated capabilities.
  • It is highly flexible. You can introduce and release new patches and updates to your software that are applied across all users, because no one has to download anything (whether an end-user or a system administrator). If you screw something up in the code, it is fast to go back to the most recent version.

7.) Allows for Size-Ambiguous IT – André Gauci at Fusioo

Fusioo is an online database app, and André Gauci is its CEO. Gauci notes that the chief element cloud has to offer that is often cost-prohibitive in a traditional setting is scalability. This strength, also mentioned by Welker above, means that, for example, you are ready for Black Friday on-demand in November but don’t have to be ready for it year-round.

8.) Ease of Access – Tieece Gordon at A1 Comms

Tieece Gordon of UK telecommunications provider A1 Comms notes that cloud is the best setting for working and storing information, because it allows you to embrace a characteristic that is fundamental for business success: versatility.

You are able to become more versatile because you can better connect with people from all areas. Your data environments are seamless and ready right now for anyone to be able to access data irrespective of where they are.

In this setting, Gordon points out, you get better productivity and better efficiency. “Instant and simple access means there’s more time to put towards more pressing operations than trying to find something lost within a pile of junk,” he says.

9.) Recurring Revenue – Reuben Yonatan at GetVoIP

GetVoIP is a voice-over-IP review site that is built on a cloud infrastructure, and Reuben Yonatan is the company’s founder and CEO. Based on a decade of experience with enterprise infrastructure, he notes that one of the best parts about a cloud-hosted app is that it gives you recurring revenue. How? If you create a subscription model for your app, customers automatically cover the cost of continuing to develop it (as opposed to your having to convince people to upgrade to a new version).

Agreeing with some of the other experts highlighted in this piece, Yonatan says that the cloud is a way to:

  • remove stress from software updates;
  • allow for simple and broad access;
  • save money in your budget; and
  • scale as needed, since the physical act of introducing hardware is unnecessary within a cloud (except at the level of the entire ecosystem).

10.) Minimized Resource Staff – Aaron Vick at Cicayda

Cicayda is a legal discovery software company, and Aaron Vick is its Chief Strategy Officer. One of Vick’s core specialty areas is technology workflow. He is sold on cloud hosting for two primary reasons mentioned by others in this report: scalability and mass updating across the whole user base. From his perspective, those two capabilities allow the virtualized model to far surpass what was possible with systems built in the 1990s and 2000s. Because you can scale on demand, you do not have to worry about your system regardless of how many people are making requests to your servers. And being able to push an updated version of the software to the entire population of users simultaneously saves money, since you don’t need to fund the support staff that a traditional installation environment would require.

11.) Innovative Edge – Jeff Kear at Planning Pod

Planning Pod is a cloud-hosted SaaS registration and event management system. The way Kear sees it, innovation is one of the most effective ways to generate attention and foster customer retention. Plus, it gives you a way to differentiate yourself – absolutely critical within a crowded market. Because the app runs on a behind-the-scenes backend instead of on user devices, the software company is much better able to introduce new features and mechanisms than if it distributed file updates that would not always get installed locally.

Plus, Kear pointed out that a software company is able to test features across the whole user community rather than testing with small groups of users.

12.) No Big Initial Investment – Mirek Pijanowski at StandardFusion

StandardFusion is a governance, risk management, and compliance (GRC) app that is hosted in the cloud – intended to make maintaining compliance and security more user-friendly. Mirek Pijanowski, the firm’s cofounder and CEO, notes that cloud is a great technology to leverage because it means you don’t have to directly manage equipment.

“Every year,” says Pijanowski, “we reevaluate the time and costs associated with moving our applications away from the cloud and quickly determine that with the dropping cost of cloud hosting, we may never go back.”

Conclusion

Are you in need of cloud hosting for your business? At Total Server Solutions, we believe that a cloud-based solution should live up to the promise described by the above developers and thought-leaders. It should be scalable, reliable, fast, and easy to use. We do it right.

 

***

 

Note: The statements by the developers that are referenced within this article were originally made to and published by Stackify.

This carnival ride goes to the cloud.


We all know about the growth of cloud computing. From a consumer perspective, the first thing that comes to mind is SaaS, or software-as-a-service. If we want to understand how seismic of a shift this form of technology is imposing on IT, though, we need to look at the building blocks, platforms and infrastructure.

Let’s look at the speed of cloud growth to better gauge this transition. Worldwide, industry analysts from Bain & Company say that cloud IT services will expand from $180 billion in 2015 to $390 billion in 2020 — a head-turning compound annual growth rate (CAGR) of 17%. Who is buying these services? It’s not just the startups, that’s for certain: 48 of the Fortune Global 50 have publicly stated that they are moving some of their systems to cloud hosting (some adopting more aggressively than others). These findings come from the Bain report “The Changing Faces of Cloud,” which also breaks down the key cloud types: from 2015 to 2020, SaaS is predicted to grow at a 17% rate, while the combined group of PaaS and IaaS is forecast to expand at an almost staggering 27%.

Why? Why is cloud hosting growing so fast? Let’s look at why 12 app developers prefer cloud. (These are from statements credited to these individuals that were published by Stackify.)

1.) Download Speed – Jay Akrishnan at Kapture

Kapture, if you don’t know, is a customer relationship management (CRM) app. The company’s product marketer, Jay Akrishnan, says that what he likes about cloud is its ability to create a comprehensive data sync across all points of access. By achieving that unity of on-demand information, you remove the constraints that keep you from fully embracing a communication system. Software is available that allows anyone to take advantage of that ability to exchange data, for business purposes such as collaboration among staff and with partners, and even for hobbies such as gaming. Akrishnan adds that cloud systems are generally stronger than what you could get with a VPN-based server.

Probably the most compelling point Akrishnan makes, though, is that the client must be able to rapidly download your app. If people cannot download very, very quickly, your app’s growth rate will suffer. Kapture relies on the cloud to deliver.

2.) Lowers Your Stress (to Increase Productivity) – John Kinskey at Access Direct

From a headquarters in Kansas City, AccessDirect provides virtual PBX phone systems to businesses around the United States. John Kinskey is its founder. The company hosts its core telephony app in the cloud – so the business is betting on the technology with its own performance and credibility.

Kinskey says that cloud is powerful because it creates multiple redundancies for your infrastructure by distributing your servers across geographically disparate data centers. Interestingly, the third-generation entrepreneur also notes that the company moved this core software to the cloud because of a couple of pain points of operating the systems in-house. One was that they did not feel they had enough redundancy. The other was that maintaining their own machines often demanded too much labor and budget. To AccessDirect, he notes, both of these elements were causes of stress that were relieved by the cloud.

Since Kinskey mentioned that aspects of a legacy scenario can be stressful, that emotional side should not be overlooked. If running your app out of your own data center is making your workplace or your own thoughts less calm, a couple of reports from risk management and insurance advisory Willis Towers Watson – both on its Global Benefits Attitudes surveys – are compelling. The 2014 report found a high correlation between stress and lack of motivation among staff: 57% of people who said their stress level was high also said that they were disengaged, compared to a 10% disengagement level among those who said their stress was low. Add to those numbers a key finding from the 2016 report – that fully three-quarters of employees say stress is their top health concern. The bottom line is that if hosting your own app is stressful, moving to the cloud is a wise business decision.

3.) Real-Time Decisions – Lauren Stafford at Explore WMS

Explore WMS gives us a great perspective because it is not software itself but rather an independent resource for supply chain professionals. The core concern for the publication is still the same: knowing that this form of hosting can efficiently deliver warehouse management software. The media outlet’s digital publishing specialist, Lauren Stafford, notes that the key need she feels is met by cloud hosting is immediate data access. Integration is often tricky in an on-premise setting since you need to think about how to configure servers; in the cloud, she explains, you are able to build more flexibly. By getting real-time data to the end user, a company has the insight it needs to make better decisions, moment by moment.

In the case of warehouse management systems (the critical point for Explore WMS), business clients are able to get actionable real-time inventory details with a status that is reliable and relevant right now – important especially if a shipment gets delayed.

4.) Meets Simultaneous Needs – Dr. Asaf Darash at Regpack

Regpack is an online registration portal that has some big-name clients, such as Goodwill, the NFL, and Stanford. The system is designed using the knowledge its founder, Dr. Asaf Darash, attained while earning a computer science PhD focused on data networks and integration. It is notable, given that substantive academic background, that Darash believes in cloud hosting. He says that cloud is the best choice because it allows you to be able to see and work with your information from any location.

Interestingly, he also points out that when offering software-as-a-service, using cloud allows you to meet that need for your clients since they share that desire to work when they are at home or on a trip to another state or country. In other words, cloud meets that need simultaneously for both you and your clients. You can access your data with a web connection, take a look at your analytics, and grab whatever need-to-know details are in the system rather than having to download everything at the level of your own in-house server.

5.) Solves Problems Before They Happen – Kevin Hayen at Let’s Be Chefs

Let’s Be Chefs is an app-based weekly recipe delivery service, so it relies on the cloud for timely interaction with all of its members. Hayen says that there are far fewer operational frustrations (the work of simply keeping everything together and moving) to suck time away from keeping the firm laser-focused on growth and development. Scaling machines is no longer an issue for Let’s Be Chefs. In this way, the cloud creates immediate peace of mind for startups, he says.

Conclusion

Check out the 7 other perspectives on cloud’s benefits for app hosting. Or, do you want high-performance cloud hosting for your application right now? At Total Server Solutions, we give you the keys to an entire platform of ready-built, custom-engineered services that are powerful, innovative, and responsive. Spin it up.

5 Ways Cloud Computing Helps Your Business


Companies across a broad spectrum, from shoestring startups to Fortune 500 enterprises, are wondering how they can better incorporate cloud computing into their organizations. There are manifold ways in which this technology can improve your efficiency and results. First, let’s look at the growth numbers for cloud to see how much is currently being invested in these systems and tools.

 

Gartner underestimates cloud growth by $600 million

 

One way to know how fast the need for public cloud services is growing is that the industry analysts are having a hard time keeping up with it. In September 2016, Gartner announced its updated projection for the market: that the sector would expand from $178 billion in 2015 to $208.6 billion in 2016 – a 17.2% rise. The primary reason for the increase is one of the three primary cloud categories, infrastructure as a service (IaaS) – forecast to skyrocket with a 42.8% revenue bump in 2016.

 

While that type of fast growth may sound unsustainable, in February Gartner issued a new analysis for 2017 with an increased growth rate of 18% – a projected rise from $209.2 billion in 2016 to $246.8 billion in 2017. Again, the most powerful expansion will occur with IaaS, only slightly decelerating from its breakneck pace to climb to $34.6 billion in 2017, a 36.8% increase. Do these new projections seem overly optimistic? Gartner is simply responding to its own underestimation of the segment: note the $209.2 billion starting figure for 2016, which outdid the $208.6 billion prediction from September. That may not seem substantial when comparing the totals, but it means there was an additional $600 million of business generated that Gartner did not foresee.

 

What are the specific ways that cloud can help your business, though? How does it empower your mission and goals?

 

By facilitating ease-of-use

 

Installing security patches and software updates can be tedious and time-consuming. Because of that, smaller companies will often use IT contractors, who can get overbooked and fall behind on individual client needs. Alternatively, sometimes the IT role is taken up by a full-time staff member such as the office manager, who does not have the time or training to handle every issue. These ways of approaching the digital world can be expensive, labor-intensive, and unnecessarily stressful. If you use a cloud provider to deliver your IT environment, explains entrepreneurial mentoring nonprofit SCORE, everything will be patched and updated automatically. Do you have an IT team? In that case, says the association, cloud is still a strong move: it relieves your employees of maintenance tasks so that they can focus squarely on emergent tech and new business development.

 

By delivering scalability so your company can grow

 

The elasticity of cloud systems gives you access to new resources on demand, so that you can deepen your infrastructure for peak times (such as the holidays or after you get press) and grow your backend exponentially if you hit a tipping point. This characteristic is critical because it is challenging for firms to figure out what computing resources they will require. Cloud sets aside the guessing game, letting you react as traffic or user behavior changes and making sure you have enough fuel to keep expanding without waste. Growth is not just about customers, of course, but also about adding your own systems. If you add a collaboration tool, the resources are immediately “at the ready” to allow it to run effectively. Your company is better able to adapt in the moment; in other words, it is more flexible.

 

By protecting you from Internet crime

 

Sony, Target, and Home Depot have all been ferociously hacked on a massive scale, and hackers have taken huge strikes at the federal government too. There were numerous reports that hackers (believed to be Russian) who had intruded into the White House and State Department email systems in 2014 were continuing to evade the government’s efforts to remove them. In this landscape, security is increasingly challenging. Furthermore, these examples of cyberattack are so vast in scale that you might think your business is too small to interest intruders – but in fact, small businesses are particularly vulnerable. The statistics are compelling along these lines:

 

  • The threat is real. Incredibly, 2012 figures from the National Cyber Security Alliance show that 1 out of every 5 small companies was already being hacked annually. Among the companies infiltrated at that time, the NCSA estimated that 3 in 5 went bankrupt within just 6 months.
  • When you go offline, your revenue suffers. According to Andrew Lerner of Gartner, polls of business executives suggest that the average cost of downtime is $5600 per minute. What about going down for an hour? In that case, the average loss is $336,000.
  • Let’s take an example of a DIY tool that businesses often fail to protect. Many, many small businesses use WordPress. The W3Techs Web Technology Surveys (accessed September 5, 2017) show that WordPress is used on 59.4% of sites that have identifiable content management systems – translating to 28.6% of all sites globally. WordPress is a big target because it is so popular; for example, Threatpost reported in February that “attackers have taken a liking to a content-injection vulnerability disclosed last week and patched in WordPress 4.7.2 that experts say has been exploited to deface 1.5M sites so far.”

 

The point? In this dangerous context, the case for digital security is compelling. The WordPress example in the third bullet above is just one of the security risks in play that could become obstacles to business. Reducing the risk of online attack increases the strength and certainty of your firm’s development.

 

Enter the cloud: credible, knowledgeable partners substantially boost the safety of most small business data (provided you do not have industrial-grade security mechanisms and monitoring on-site). As SCORE advises, “Storing your data in the cloud ensures it is protected by experts whose job is to stay up-to-date on the latest security threats.”

By allowing your team to work together

 

A lot of business discussion lately has been about the value of collaboration. For instance, one of the most often-praised benefits of an open office layout is how it enhances collaboration (although there is definitely not complete agreement that open offices deliver on their promises). Regardless of how effective that design approach is, it signals how important the integration of numerous people’s perspectives is to business.

 

How can cloud help with this business need? It is collaborative by design. Cloud-hosted apps are accessible 24/7 from virtually any web-connected device. That means, effectively, that your business no longer has walls in terms of letting people interact with your systems to meet your business objectives. Through a cloud ecosystem, personnel and other partners in widely distributed geographical locations can work together on the same file (which is automatically backed up if you’re using a high-quality provider).

 

Through a well-managed cloud platform, people from all over the country, and internationally, can contribute to the project without having to worry about repeatedly passing files back and forth through email. While email is still a comfort zone for many, its model of sending files back and forth is less efficient than the cloud. Plus, it creates more potential for a space-time paradox, the accidental creation of a second “working copy” of a project (the results of which Doc Brown warns could “destroy the entire universe”).

 

By being ready to launch, now

 

Another key point about cloud, mentioned above but that deserves its own attention, is that there is no ramp-up time for a cloud system. You can access one today.

 

Do you need a cloud system that is easy to use, highly scalable, reliable, fast, and secure, so you can start collaborating at a moment’s notice? At Total Server Solutions, we do it right.

Split Testing E-Commerce Revenue


Sometimes it can be difficult to figure out exactly what it is that is making your company’s growth plateau or shrink. In fact, it is often challenging to even perceive some potential culprits because they seem so fundamentally beneficial. Nonetheless, it is important to ask hard questions – and, in so doing, put different aspects of your company under a microscope – if you want to grow. (For example, have you truly adopted high-performance hosting so that your infrastructure is furthering UX?)

 

In that spirit, here’s a question: Is it possible that split testing (or A/B testing) could be hurting your e-commerce revenue? Clearly, the concept behind split testing is a sound one: by showing different versions of a page to random portions of your audience, you should be able to determine which version is preferable based on how well each one turns visitors into users or customers. This method has even been used – somewhat controversially – by major newspapers to split-test headlines, driving more traffic to news stories to keep those outfits prominent in the digital era.

 

A/B testing seems to be a smart way to better understand how your prospects and users make decisions; so how could it hurt your revenue? Online growth specialist Sherice Jacob notes that the trusted, somewhat standardized practice often does not deliver the results that business owners and executives expect. Jacob points out that this form of digital analysis, somewhat bizarrely, “could be the very issue that’s causing even the best-planned campaign to fall on its face.”

 

In a way, though, it’s not bizarre. Thoughtful business decisions often have unexpected results. (Anything can be done well or poorly – such as your choice of host, which will determine whether your infrastructure is secure. Failure to look for SSAE-16 auditing is an example of a mistake made when picking a web host.) What mistakes can be made when split testing? How and why does it fail? Let’s take a look.

 

  • How many tails do you have?
  • The magic of split testing: Is it all an illusion?
  • Getting granular – 6 key questions for core hypotheses
  • SEO hit #1 – failure to set canonicals
  • SEO hit #2 – failure to delete the losing option
  • Results from your e-commerce hosting

 

How many tails do you have?

 

Analytics company SumAll put two copies of a page that were identical – with no differences whatsoever – into one of the most well-known split-testing tools, Optimizely. Option A beat option B by almost 20%. Optimizely fixed that particular issue; nonetheless, it reveals how misleading the output from these experiments can be. Imagine, after all, if those pages had differed by just one minor detail. You would then confidently assume that A was the better choice, and feel backed up by the software’s numbers.

 

The reason an issue such as this can arise with A/B testing comes down, fundamentally, to the statistical approach built into the tool. These approaches are categorized as one-tailed and two-tailed. A one-tailed test only looks for an effect in one direction – a black-and-white answer. With just one tail, your weakness is the statistical blind spot, says Jacob. A two-tailed test looks at these e-commerce outcomes from both directions.

 

The distinction made by the UCLA Institute for Digital Research and Education helps to clarify (a short numerical sketch follows the list):

 

  • One-tailed – Testing that is based on determining whether there is a relationship from a single direction “and completely disregarding the possibility of a relationship in the other direction.”
  • Two-tailed – No matter which direction you use to address the relationship, “you are testing for the possibility of the relationship in both directions.”
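
Here is a brief sketch in Python of how the choice of tails changes the verdict on the same data; it uses the statsmodels library, and the conversion counts are invented purely for illustration.

    # pip install statsmodels
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical results: 238 conversions from 10,000 visitors on variant B,
    # and 200 conversions from 10,000 visitors on variant A.
    conversions = [238, 200]            # B first, then A
    visitors = [10_000, 10_000]

    # One-tailed: only asks "is B better than A?"
    _, p_one = proportions_ztest(conversions, visitors, alternative="larger")

    # Two-tailed: asks "is B different from A, in either direction?"
    _, p_two = proportions_ztest(conversions, visitors, alternative="two-sided")

    print(f"one-tailed p-value: {p_one:.3f}")   # about 0.03 -- looks like a winner
    print(f"two-tailed p-value: {p_two:.3f}")   # about 0.07 -- not significant at 5%

A result that clears the 5% significance bar one-tailed can fail it two-tailed, which is exactly the blind spot Jacob describes.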

 

The magic of split-testing: Is it all an illusion?

 

In 2014, conversion optimization firm Qubit published a white paper by Martin Goodson with the shocking title, “Most Winning A/B Test Results are Illusory.” In the report, Goodson presents evidence that poorly performed split testing is actually more likely to lead to false conclusions than true ones – and, well, um, bad information should not be integrated into e-commerce strategy.

 

The crux of Goodson’s argument comes down to the concept of statistical power – which can be understood by thinking of a project in which you want to find out the height difference between men and women. Measuring only one member of each sex would not give you a very broad set of data. Using a larger population of men and women – getting a large set of heights by measuring a lot of people – means the average heights will stabilize and the actual difference will be better revealed. As your sample size grows, you gain greater statistical power.

 

To get back to the notion of split testing, let’s say that you have two variants of the site you want to assess. Group A sees the site with a special offer. Group B sees the site without it. You simply want to measure the difference in response based on the presence of the offer. The difference between the two results should be considered in light of the statistical power (amount of traffic).

 

What is the significance of statistical power? Knowing the sample size (volume of traffic) you need before you start ensures that you don’t stop the testing before you have collected enough data. Stopping too early makes it easy to act on false positives that lead you in the wrong direction.
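
As a rough guide to settling on that sample size up front, here is a sketch of a standard power calculation using Python’s statsmodels library; the baseline conversion rate and the smallest lift worth detecting are assumptions you would swap for your own numbers.

    # pip install statsmodels
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    baseline = 0.020   # assumed current conversion rate: 2.0%
    target = 0.023     # smallest lift worth detecting: 2.3% (a 15% relative lift)

    effect_size = proportion_effectsize(target, baseline)

    # Visitors needed per variant for 80% power at a 5% significance level (two-tailed)
    n_per_variant = NormalIndPower().solve_power(
        effect_size=effect_size,
        alpha=0.05,
        power=0.80,
        ratio=1.0,
        alternative="two-sided",
    )
    print(f"visitors needed per variant: {n_per_variant:,.0f}")

Until each variant has seen roughly that much traffic, any “winner” the dashboard announces is largely noise.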

 

Goodson says to think of a scenario in which two months would give you enough statistical power for results to be reliable. A company wants the answer right away, so they test for just two weeks. What is the impact? “Almost two-thirds of winning tests will be completely bogus,” he says. “Don’t be surprised if revenues stay flat or even go down after implementing a few tests like these.”
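
The effect Goodson describes is easy to reproduce. The sketch below simulates an A/A test – two identical pages with no real difference – and checks the results every day, declaring a winner the moment the gap looks “significant.” The daily traffic level, test length, and checking schedule are invented for illustration.

    # pip install numpy statsmodels
    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    rng = np.random.default_rng(42)
    true_rate = 0.02          # both pages convert at exactly the same rate
    daily_visitors = 500      # per variant, assumed for illustration
    days = 60                 # roughly a two-month test
    runs = 1_000
    false_wins = 0

    for _ in range(runs):
        conv_a = conv_b = n = 0
        for day in range(days):
            n += daily_visitors
            conv_a += rng.binomial(daily_visitors, true_rate)
            conv_b += rng.binomial(daily_visitors, true_rate)
            _, p = proportions_ztest([conv_a, conv_b], [n, n])
            if p < 0.05:      # "peeking": stop as soon as it looks significant
                false_wins += 1
                break

    print(f"bogus 'winners' declared in {false_wins / runs:.0%} of identical A/A tests")

Even though the two “variants” are identical, a sizable share of runs declare a winner simply because the test was stopped at the first lucky fluctuation.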

 

Getting granular – 6 key questions for core hypotheses

 

You want results that are meaningful from these tests. Otherwise, why bother? Think in terms of possible sources of confusion or frustration for visitors, either at the level of the hook or within the funnel, advises Qualaroo CEO Sean Ellis. Get this information directly from users via surveys or other comments.

 

Based on those bits and pieces, come up with a few hypotheses – your hunches about what you can do that might improve the conversion rate or give you better business intelligence. You can see whether or not those hypotheses are correct using the A/B tests, via an organized testing plan. A testing plan will make it much easier to strategize and consistently collect more valuable information.

 

These 6 questions can guide you as you develop your testing plan, says Ellis:

 

  1. What is confusing customers?
  2. What is my hypothesis?
  3. Will the test I use influence response?
  4. Can the test be improved in any way?
  5. Is the test reasonable based on my current knowledge?
  6. What amount of time is necessary for this test to be helpful?

 

That short list of questions can help you become more sophisticated with your A/B testing to avoid false positives and use the method in its full glory.

 

SEO hit #1 – failure to set canonicals

 

Split testing can hurt your SEO as well. Because the two nearly identical versions of the same page can confuse search engines, set a canonical URL for each test page – for example, a <link rel="canonical" href="https://example.com/original-page/"> tag in the head of the variant, pointing back to the original – so search engines know which version to index.

 

SEO hit #2 – failure to delete the losing option

 

Another issue for SEO (and in turn for your revenue, if not your conversion) that can be caused by A/B testing arises when you do not remove the page that loses the comparison. That’s particularly important if you’ve been testing the choices for a while – since that generally means the search engines will have indexed it.

 

“Deleting it does not delete it from search results,” notes Tom Ewer via Elegant Themes, “so it’s quite possible that a user could find the page in a search, click it, and receive a 404 error.” The standard remedy is to set up a 301 redirect from the losing URL to the winning page, so anyone who finds the old link in search lands somewhere useful instead of on an error page.

 

Results from your e-commerce hosting

 

Just as you want to see impressive results (and not a downturn) from your split-testing, you want your hosting to be working in your favor as well – and for online sales, security and performance are fundamental. At Total Server Solutions, compliance with SSAE 16 is your assurance that we provide the best environmental and security controls for data & equipment residing in our facilities. See our high-performance plans.

How to Set Up a Non-Blog WordPress Site

Posted by & filed under List Posts.

WordPress excels as a blogging platform – blogging was its original purpose – but it has become an increasingly sophisticated general tool for building websites. You could use it to create an e-commerce shop, a portfolio, or a business site.

 

Note that you could easily include a blog with that site if you want, either up front or at any later point – as indicated below in the discussion of a page for posts. The blog does not have to be the defining centerpiece of your site, though.

 

Here is how you would go about setting up WordPress as a static site, drawing many of the ideas from WordPress theme and plugin company DesignWall.

 

  • What exactly is a static site (vs. a dynamic one)?
  • How to set the homepage of your static WordPress site
  • Creating the menus of your site
  • How to make your non-blog WordPress site stand out
  • The option of doing posts within their own page
  • Great hosting for strong WordPress UX

 

What exactly is a static site (vs. a dynamic one)?

 

A static site has a homepage that does not change, no matter what new content goes up elsewhere. That is in contrast to a dynamic site, which changes as you add new material – displaying the most recent posts from your blog. (To be completely accurate from a technical standpoint, a site built on WordPress remains dynamic no matter what; what you are really doing is giving it a static face.)

 

The homepage will always display the same page – so let’s talk about that aspect first.

 

How to set the homepage of your static WordPress site

 

You can establish this page whether you are starting from a brand-new installation or working with an existing site. Don’t worry yet about exactly what you want to say; you can create the page now and come back later to refine the message. Just follow these seven steps (if you would rather script the setup, a rough sketch using the WordPress REST API follows the list):

 

  1. Log into your WP admin account.
  2. Click on Pages in the left-hand sidebar, and select Add New Page.
  3. Give it the simple name “Homepage” for now (which can be changed later).
  4. Your theme may give you the option to turn off Comments and Pingbacks, typically both listed under “Discussion.” If those options are not available there, you will see them as small checkboxes on each page in the upper-right-hand corner above where it says, “Publish.”
  5. To test and go live with this page, go into Reading Settings, which is within Settings in the sidebar.
  6. There, you will see Front page displays, and you want that to be “A static page”; to complete this option, select “Homepage” and then Save Changes.
  7. Look at your site, and you should see the Homepage displayed as your homepage.
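
If you would rather script these steps than click through the dashboard, here is a rough sketch using the WordPress REST API with Python’s requests library. The site URL and application-password credentials are placeholders, and whether the front-page options can be changed over the API depends on your WordPress version – on older installs you would still finish steps 5 through 7 in Reading Settings.

    # pip install requests
    import requests

    SITE = "https://example.com"                 # placeholder: your WordPress site
    AUTH = ("admin", "application-password")     # placeholder: a WP application password

    # Steps 2-3: create the page that will serve as the static homepage
    resp = requests.post(
        f"{SITE}/wp-json/wp/v2/pages",
        auth=AUTH,
        json={"title": "Homepage", "status": "publish"},
    )
    resp.raise_for_status()
    page_id = resp.json()["id"]
    print("Created page with ID", page_id)

    # Steps 5-6: recent WordPress versions expose the Reading Settings here;
    # if this call fails on your install, set "Front page displays" in the dashboard.
    settings = requests.post(
        f"{SITE}/wp-json/wp/v2/settings",
        auth=AUTH,
        json={"show_on_front": "page", "page_on_front": page_id},
    )
    print("Settings update status:", settings.status_code)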

 

Creating the menus of your site

 

It is time to establish menus for your static WordPress site. However, before we move forward with menus, think about what other pages you will need, and go ahead and create draft versions of those. Just create the pages at this point, without concerning yourself about the content. By having these pages at least in very rough place-holding form, you will be able to set up your navigation menu in a more logical and meaningful way.

 

Go ahead and add those pages the same as you did the Homepage. They can have simple names at this point. Beyond a homepage, here are the “must-have” pages for a 10-page business site, according to custom WordPress theme firm Bourn Creative: About, Services, Products, FAQ, Testimonials, Contact, Privacy Policy, Newsroom, and Portfolio. Adjust as makes sense for your industry and company.

 

Now go to the left Sidebar, click on Appearance, and select Menus. Here, you will see that you can add any of the pages you just created to your menu, which (depending on your theme) is typically displayed on your main header or in the sidebar.

 

You can nest any of the menu items you want by dragging and dropping them into position.

 

It is also possible to change names of menu items to whatever you want (without having to rename the linked page). Go into Menu Settings, and you can automatically add pages to the menu if that makes sense to you.

 

Adding pages automatically is not necessarily a good idea, though, because you can end up with a lot of clutter. You probably want certain pages to be especially prominent (e.g., About Us, Products or Services).

 

You may also have the option within your theme to change where this menu can be seen on your site.

 

How to make your non-blog WordPress site stand out

 

Probably, you do not want a mediocre non-blog WordPress site. You want a great one. Here are some tips on how to make it stronger from Alyssa Gregory on SitePoint:

 

  1. Choose a strong theme. Gregory notes the importance of the theme in terms of how your content will be displayed. You may not want to have dates in your posts, for instance. Something with a magazine format will typically work well.
  2. Figure out how pages and posts make sense. Gregory also mentions that you do not have to set up a non-blog WordPress site as a series of pages; you can use posts instead. However, using pages is more organized, from her perspective. If you do use posts or are dedicated to that structure for whatever reason, her advice would be to stick to it – because trying to create a hierarchy that crosses between pages and posts on a non-blog WP site could quickly get confusing. However, you can really use both in a meaningful way as long as the posts all appear within a certain setting, on their own distinct page (so you have someplace that you’re building content, even if it’s not the basis of the site). See below on that.
  3. Dig into the code. Inevitably, the theme will need a little adjustment “under the hood.” That will allow you to clear out some of the more blog-centered elements that are built into the theme. An example would be when you turn off the ability to comment. You may still have a No Comments line in many themes, but that could be removed at the level of the code. It is usually also a good idea to clear out the RSS subscription option and anything else that is more of a reference to blogging than to a website without the blogging function.

 

The option of doing posts within their own page

 

You do not have to have a page for your Posts. However, if you do use Posts on a non-blog site, you will want to organize them within a page so that the non-blog structure remains the basis for everything. Actually, it does not hurt to create this page, says WPSiteBuilding.com, even if you don’t use it at this point. Generally, a blog is considered a good idea for search prominence and general engagement. This page could be called Blog or News or Thoughts or Updates – whatever you want. Just give it a title, with nothing else on the page, and publish it to test.

 

Great hosting for strong WordPress UX

 

Do you want to deliver the best user experience through your non-blog WordPress site? At Total Server Solutions, we are always working to find the best, most effective ways to serve you and provide solutions to help you meet your challenges. Explore our platform.

Could IoT Botnet Mirai Survive Reboots

Posted by & filed under List Posts.

Mirai has been turning swaths of the internet of things into a zombie army, so it is no wonder that manufacturers are taking steps to protect against it. However, one IoT device manufacturer’s failed attempt to protect its products against the botnet (used in massive DDoS attacks) shows how challenging this climate has become. When China-based IoT maker XiongMai attempted to patch its devices so that the malware would be blocked, the result was described as a “terrible job” by security consultant Tony Gee.

 

Gee explained that he took products from the manufacturer to a trade convention, the Infosecurity Europe Show. The Floureon digital video recorders (DVRs) used in Gee’s demo did not have telnet open on port TCP/23 – but shutting off telnet access was insufficient as a defense.

 

Gee went through port 9527 via ncat. The passwords matched those of the web interface, and it was possible to open a command shell. Within the command shell, Gee opened a Linux shell and established root access. From the root user position, it was simple to enable telnet.

 

Even devices with telnet shut down can be hacked by opening a shell this way and restarting the telnet daemon, explained Gee, adding ominously, “And we have Mirai all over again.”

 

  • Tale of an immortal zombie
  • How could Mirai grow larger?
  • The doom and gloom of Mirai
  • How to protect yourself from DDoS
  • Layers of protections
  • What this all means “on the ground”

 

Tale of an immortal zombie

 

Mirai is changing, much to the frustration of those who care about online security. Until now, malware infecting IoT devices (such as routers, thermostats, and CCTV cameras) could be cleared away with a reboot.

 

A method was discovered in June that could be used to remotely access and repair devices that have been enslaved by the botnet. The flip side of this seemingly good news is that the same avenue could let a Mirai operator reinfect devices after a reboot – which is why researchers did not release details.

 

Notably, BrickerBot and Hajime also have strategies that try to create a persistent, “immortal” botnet.

 

Out of concern that the details would be abused by a malicious party, the researchers withheld specifics about the vulnerability itself. The firm did, however, list numerous other weaknesses that could be exploited by those running the botnet.

 

How could Mirai grow larger?

 

What other paths of exploit could allow Mirai to grow even larger than it is now? They include the following (a quick port-audit sketch follows the list):

  • DVR default usernames and passwords that can be incorporated into the worm element of Mirai, which uses brute-force methods through the telnet port (via a list of default administrative login details) to infiltrate new devices.
  • Port 12323, an alternative port used as telnet by some DVR makers in place of the standard one (port 23).
  • Remote shell access, through port 9527, to some manufacturers’ devices through the username “admin” and the passwords “[blank]” and “123456.”
  • One DVR company that had passwords that changed every single day (awesome), but published all the passwords within its manual on its site (not awesome).
  • A bug that could be accessed through the device’s web server, accessible through port 80. This firmware-residing buffer overflow bug currently exists in 1 million web-connected DVR devices.
  • Another bug makes it possible to get password hashes from a remote device, using an HTTP exploit called directory traversal.
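
If you want to check your own hardware for the doors listed above, here is a minimal sketch that simply tests whether those TCP ports are open on a device. The IP address is a placeholder for a DVR or camera on your own network – only scan equipment you own.

    import socket

    DEVICE_IP = "192.168.1.50"    # placeholder: a DVR or camera on your own LAN
    PORTS = {
        23: "telnet",
        12323: "alternate telnet used by some DVR makers",
        9527: "undocumented remote shell",
        80: "web interface",
    }

    def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port, label in PORTS.items():
        state = "OPEN" if port_open(DEVICE_IP, port) else "closed"
        print(f"{DEVICE_IP}:{port:<6} {label:<45} {state}")

An open port is not proof of a vulnerability, but anything answering on 23, 12323, or 9527 deserves a close look at its login credentials and firmware version.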

 

The doom and gloom of Mirai

 

The astronomical expansion of Mirai is, at the very least, disconcerting. One recent report highlighted in TechRepublic found that Internet of Things attacks grew 280% during the first six months of 2017. The botnet itself is at approximately 300,000 devices, according to numbers from Embedded Computing Design. That’s the thing: Mirai is not fundamentally about IoT devices being vulnerable to infection. It’s about the result of that infection – the massive DDoS attacks that can be launched against any target.

 

Let’s get back to that infected and unwitting Frankenstein-ish army of “things” behind the attacks, though – it could grow through changes to the source code by hackers, updating it to include more root login defaults.

 

The botnet could also become more sophisticated and powerful as malicious parties continue to transform the original so that it has more complex capacities to use in its DDoS efforts. Today it has about 10 vectors or modes of attack when it barrages a target, but other strategies could be added.

 

How to protect yourself from DDoS

 

Distributed denial of service attacks from Mirai really are massive. They can push just about any firm off the Internet. But the concern is not just that single event of being hammered by false requests. Hackers often open with a toned-down attack, a warning shot that is frequently not recognized as a pre-DDoS by custom in-house or legacy DDoS mitigation tools (as opposed to a dedicated DDoS mitigation service). These dress-rehearsal attacks, usually measuring under 1 Gbps and lasting 5 minutes or less, fly under the radar of DDoS protection solutions whose settings require attack traffic to be more substantial before they react.
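
To illustrate why those warning shots slip through, here is a toy sketch – not a real mitigation product – comparing a fixed “big attack” threshold with a simple check against the site’s own baseline. All of the traffic numbers are invented.

    # Toy illustration: a small "warning shot" evades a fixed threshold
    # but stands out against the site's own baseline traffic.
    normal_traffic = [0.04, 0.05, 0.06, 0.05, 0.04, 0.05]   # Gbps, typical minutes
    warning_shot = [0.05, 0.60, 0.70, 0.65, 0.05, 0.04]     # a sub-1-Gbps burst

    FIXED_THRESHOLD_GBPS = 1.0   # legacy tooling: only react to "big" attacks

    def fixed_threshold_alerts(samples):
        """Flag only the minutes that exceed the fixed threshold."""
        return [gbps for gbps in samples if gbps > FIXED_THRESHOLD_GBPS]

    def baseline_relative_alerts(samples, baseline, multiplier=5.0):
        """Flag any minute that exceeds several times the normal level."""
        avg = sum(baseline) / len(baseline)
        return [gbps for gbps in samples if gbps > multiplier * avg]

    print("fixed threshold sees:   ", fixed_threshold_alerts(warning_shot))        # nothing
    print("baseline-relative sees: ", baseline_relative_alerts(warning_shot, normal_traffic))

A real service does far more than compare averages, of course, but the principle is the same: detection has to be relative to what is normal for your traffic, not just an absolute “is this enormous?” test.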

 

When DDoS attacks started more than 20 years ago, engineers would use a null route, also known as a remotely triggered black hole, to push the traffic away from the network and prevent collateral damage to other possible victims.

 

Next, DDoS mitigation became more sophisticated: traffic identified as problematic on a network was redirected to a DDoS scrubbing service, where human operators analyzed the attack traffic. This process was inefficient and costly, and remediation often did not begin right away after detection.

 

Now, DDoS protection must both be able to “see” a DDoS developing in real time and be able to gauge the broader DDoS climate for trends, taking proactive steps to mitigate an attack. Enterprise-grade automatic mitigation protects you from these attacks and maintains your reliability.

 

Layers of protections

 

There are various levels at which distributed denial of service can – and should – be challenged and stopped. First, a DDoS protection service built by a strong provider can effectively keep you safe from real and present threats – but other efforts can be made as well. Internet service providers (ISPs) can also protect the web by removing attack traffic before it heads downstream.

 

Defense should really happen at every level, though. The people who make the pieces of the IoT – software, firmware, and device manufacturers – should build in protections from the start. Installers and system admins should change passwords from the defaults and apply patches as soon as they become available.

 

What this all means “on the ground”

 

It’s important to recognize that this issue is not just about security firms, device manufacturers, and criminals. It’s also about, really, all of us: the home users of devices, such as the DVR. (If you don’t know, a DVR is a device that records video on a mass storage device such as an SD memory card or USB flash drive… when it isn’t busy being used in botnet attacks).

 

The home user should be given reasonable security advice. Many users do not respond quickly when new patches are released. IoT devices are often built just well enough to operate; security is not a priority. That is bad – but it means users need to do their homework on security prior to purchase. They also need to change default passwords to complex, randomized ones.
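
Changing the factory password does not have to mean inventing one yourself; a few lines of Python can generate a long random one (this is just a convenience sketch – a reputable password manager does the same job).

    import secrets
    import string

    def random_password(length: int = 20) -> str:
        """Generate a random password from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())   # a different, hard-to-guess value on every run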

 

Protect yourself from Mirai

 

What can you do to keep your business safe from Mirai and other DDoS attacks? At Total Server Solutions, our DDoS mitigation & protection solutions keep your site up and running, your content flowing, and your customers buying, seamlessly. How does it work?