Cloud Items

Published in Cloud Concepts

USING CLOUD IAAS PROVISIONING TIME AS AN INDICATOR OF FUTURE PERFORMANCE

As someone who has tested and used multiple public cloud providers and local virtualization platforms, I can unequivocally say that the amount of time it takes to provision a server through a provider's UI or scripting environment is a good indicator of the overall performance of that cloud provider's infrastructure. Further, that performance level will have a direct financial impact on every role that uses the cloud service, from developers and infrastructure maintainers to end users. There are many, many criteria that can affect the purchase and use of a public cloud provider, but the first thing testers must do is compare how long it takes to provision servers in each public cloud candidate. That comparison alone will be very telling and can eliminate the poor performing clouds from consideration. Poor performance is a cost pit that should be, and easily can be, avoided with some very simple and inexpensive testing, as sketched below.
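To make that comparison concrete, here is a minimal timing sketch, assuming Python with boto3 and AWS as one candidate; the AMI ID, region, and instance type are placeholders, and the same stopwatch approach applies to any provider's SDK or CLI.

```python
# Minimal sketch: time how long a candidate cloud (AWS here, via boto3)
# takes to provision a server. The AMI ID, region and instance type are
# placeholders -- substitute values for the provider you are testing.
import time
import boto3

def time_provisioning(image_id="ami-xxxxxxxx", instance_type="t2.micro",
                      region="us-west-2"):
    ec2 = boto3.client("ec2", region_name=region)

    start = time.monotonic()
    resp = ec2.run_instances(ImageId=image_id, InstanceType=instance_type,
                             MinCount=1, MaxCount=1)
    instance_id = resp["Instances"][0]["InstanceId"]

    # Stop the clock only once the instance is actually running.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    elapsed = time.monotonic() - start

    # Clean up so the test itself does not run up a bill.
    ec2.terminate_instances(InstanceIds=[instance_id])
    return elapsed

if __name__ == "__main__":
    samples = [time_provisioning() for _ in range(5)]
    print("provisioning times (s):", [round(s, 1) for s in samples])
    print("average (s):", round(sum(samples) / len(samples), 1))
```

Run the same loop against each candidate provider and the slow ones stand out after a handful of samples.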

CLOUD PERFORMANCE IS A CRITICAL ASPECT FOR CHOOSING A PUBLIC CLOUD PROVIDER

The internal performance of a public cloud provider has significant ramifications for a variety of aspects of a public cloud solution. One major consequence is that the overall cost of a purely usage-based solution could end up being significantly higher on a poorly performing cloud provider. One company that provides some pretty solid evidence around public cloud provider performance is Cloud Spectator. Their main focus is IaaS, but the results are relevant to other aspects as well. For example, if a vendor builds a SaaS solution on a poorly performing public cloud, they would pay more for each client than they would on a highly performant cloud with the same cost and billing structure.
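To make the cost point concrete, here is a back-of-the-envelope sketch with invented numbers (not Cloud Spectator data): two providers bill the same hourly rate, but the slower one serves fewer clients per instance, so it needs proportionally more instance-hours.

```python
# Illustrative only: cost per client when two providers share the same
# hourly rate but differ in per-instance throughput. Numbers are made up.
def cost_per_client(hourly_rate, clients_per_instance, hours=730):
    """Monthly cost to serve one client on a given provider."""
    return hourly_rate * hours / clients_per_instance

fast = cost_per_client(hourly_rate=0.10, clients_per_instance=100)
slow = cost_per_client(hourly_rate=0.10, clients_per_instance=60)

print(f"fast provider: ${fast:.2f} per client per month")  # ~$0.73
print(f"slow provider: ${slow:.2f} per client per month")  # ~$1.22
print(f"cost ratio (slow / fast): {slow / fast:.0%}")       # ~167%
```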

Check em out – http://www.cloudspectator.com/

THE RECENT CELEBRITY HACKS ARE CREATING ISSUES FOR CLOUD PROVIDERS

All hacks that expose confidential data are of concern, but the entire scenario needs to be examined to determine where the fault really lies. Some links to various perspectives are below.
http://www.ibtimes.com/celebrity-hack-forces-cloud-computing-address-privacy-1676252

http://www.channelweb.co.uk/crn-uk/opinion/2363399/star-exposure-no-joke-for-cloud-providers

http://www.fiercecio.com/story/shining-light-shadow-it/2014-09-04

http://cloudtweaks.com/2014/09/cloud-blame-app-invasion-part-1-fallout/

99 Days of Freedom

Published in Uncategorized

There is an interesting project/experiment that helps users see what it is like to be free from Facebook for 99 Days or more.

If you want to check it out, head over to:

http://99daysoffreedom.com

Big Data and Cloud Security

A recent presentation about using Big Data approaches for discovering malicious behavior in massive amounts of machine data from a variety of sources.

 

Cloud Contract Info

Published in Cloud Concepts

As all forms of Cloud Computing become more popular, it is increasingly important to understand the contractual implications of integrating cloud providers into an organization's computing portfolio. Recently, the Stanford Technology Law Review posted a great paper that covers the fundamental aspects of cloud contracts. It is a must-read for anyone interested in contractual concepts and issues with cloud providers, and for those involved in the due diligence around using cloud technologies in their organization.

Click HERE for access to the Stanford Technology Law Review website.

 

Billion Node Cloud

Published in Cloud Concepts

The concept of a Billion Node Cloud is not necessarily new. What is unique about this improved version is the inclusion of MicroServers and Wireless technologies to allow Cloud Computing to happen anywhere – not just in monolithic data centers.

http://www.billionnodecloud.com

CriKit Desktop Private Cloud

Published in Cloud Concepts

Since I am the creator of CriKit, let me explain a bit about its genesis. Like many others in the Private Cloud space, I was very frustrated with the state and cost of Private Cloud platforms. I had been watching the entry point for Private Cloud hardware and software drop over time, but it was still nowhere near a level where a small company or an individual tinkerer could buy in; it remained the realm of larger businesses with project budgets. I set out to change all that and make Private Cloud platforms available to the masses at a reasonable price point. The vision was to create a Desktop Private Cloud platform that is compact, powerful, energy-efficient, and reasonably priced. The result of a lot of effort is CriKit – the Cloud Resource and Infrastructure Kit.

Make no mistake: it may be small, but it is very capable, and it only sips electricity while providing an entire Private Cloud platform. In many critical ways, CriKit also represents the future of small business computing. It can run as many virtual servers as its CPU and memory allocations allow, and it can certainly participate in a Hybrid Cloud arrangement, so it has all the makings of a future cloud solution now. Today, a company can run 8 virtual servers on each node and burst to public clouds if necessary. With 4 nodes, that is 32 virtual servers, plus burst capability to public clouds. That sounds like a nice small business solution to me (a quick capacity sketch follows below).

However, that is not CriKit's initial audience. It is aimed at people like me, and maybe you: those who need to develop, test, evaluate, investigate, and educate on Private Cloud technologies. I want to run Eucalyptus today, Nimbula tomorrow, and OpenNebula the day after, and save my entire multi-node configurations so I can build what I want, when I want. With CriKit, I can do that.

CriKit is unique in the market today, but others will follow. As a solution, it makes too much sense and is too valuable to certain roles for companies not to develop something like it. For now, though, CriKit is the only one of its kind, and if you develop, test, and educate around multi-node cloud technologies and want an entire cloud on your desk, it is OK to drool. 32 threads, 64GB of RAM across 4 nodes, multiple terabytes of storage, a powerful management/development workstation, plus KVM and network switches make this one very, very cool Desktop Private Cloud solution.
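For the capacity arithmetic above, here is a throwaway sketch; the per-VM sizing and the oversubscription ratio are my own illustrative assumptions, not CriKit specifications.

```python
# Rough capacity check for a 4-node desktop cloud. The per-VM sizing and
# the 2:1 vCPU oversubscription ratio are illustrative assumptions only.
NODES = 4
THREADS_PER_NODE = 8          # 32 threads total across 4 nodes
RAM_GB_PER_NODE = 16          # 64 GB total across 4 nodes

VCPUS_PER_VM = 1
RAM_GB_PER_VM = 2
CPU_OVERSUBSCRIPTION = 2.0    # common for light dev/test workloads

vms_by_cpu = int(THREADS_PER_NODE * CPU_OVERSUBSCRIPTION / VCPUS_PER_VM)
vms_by_ram = int(RAM_GB_PER_NODE / RAM_GB_PER_VM)
vms_per_node = min(vms_by_cpu, vms_by_ram)

print(f"VMs per node: {vms_per_node}")                       # 8, RAM-bound
print(f"VMs across {NODES} nodes: {vms_per_node * NODES}")   # 32
```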

http://www.usmicro.com

Uncertainty Principle of Cloud Computing

This concept primarily applies to Public Cloud environments and should be considered when determining the ROI of any Cloud-based solution.

Morse Uncertainty Principle of Cloud Computing© (MUPCC)

The primary objective of business computing is to run applications that provide a benefit to the organization. MUPCC postulates that, in Cloud Computing environments, the amount of the environment being used specifically for the business application is adversely affected by variables that are dynamic and arguably uncontrollable. MUPCC has direct relevance to the ROI from Cloud Computing environments.

In Cloud Computing implementations there is a wide variety of software that is loaded and running on the application compute node. This software includes the Hypervisor, Operating System(s), General Management, Monitoring/Metering, Firewall, Antivirus, Backup, and others depending on the infrastructure design. MUPCC contends that the CPU, Disk, Network and Storage utilization of the combination of all the node software is sufficiently dynamic and unpredictable so as to adversely affect the primary application workload and reduce the business benefit expected from that workload relative to the investment in the environment.
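As a rough illustration of that contention, the sketch below estimates how much of a node is left for the primary application once the supporting software takes its share; the utilization figures are invented for the example, not measurements.

```python
# Illustrative only: how much of a compute node is left for the primary
# business application once the supporting software takes its share.
# The figures are invented, and in practice they fluctuate, which is
# exactly the point of MUPCC.
node_software_cpu_share = {
    "hypervisor":          0.05,
    "guest OS overhead":   0.05,
    "management agents":   0.02,
    "monitoring/metering": 0.02,
    "firewall":            0.01,
    "antivirus scan":      0.10,   # spikes during scheduled scans
    "backup job":          0.08,   # spikes during backup windows
}

overhead = sum(node_software_cpu_share.values())
available_for_app = 1.0 - overhead

print(f"CPU consumed by supporting software: {overhead:.0%}")       # 33%
print(f"CPU left for the primary application: {available_for_app:.0%}")  # 67%
```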

When you add the effects of random workload migration between physical compute nodes in a virtualized environment, the adverse effects on the compute environment increase, because all workloads on both nodes, the sending node and the receiving node, are affected by the migration. CPU, memory, networking and storage are consumed to perform the migration, which affects all other workloads on each node. Further, depending on the overall computing environment, several to many nodes may be affected by inter-node workload migration because network bandwidth and storage are shared by some number of additional nodes.
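A back-of-the-envelope estimate of how long a single live migration ties up both nodes and the shared network might look like the sketch below; all of the numbers are assumptions for illustration, not benchmarks.

```python
# Back-of-the-envelope: how long one live migration keeps two nodes busy.
# All numbers are assumptions for illustration, not benchmarks.
vm_memory_gb = 16
dirty_rate_factor = 1.3        # memory re-copied because pages keep changing
nic_gbps = 10
migration_share_of_nic = 0.5   # migration traffic competes with workloads

data_to_move_gb = vm_memory_gb * dirty_rate_factor
effective_gbps = nic_gbps * migration_share_of_nic
seconds = data_to_move_gb * 8 / effective_gbps

print(f"data transferred: ~{data_to_move_gb:.0f} GB")                      # ~21 GB
print(f"both nodes (and the shared network) affected for ~{seconds:.0f} s") # ~33 s
```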

The high level variables that affect each compute node include:

1. Primary business application demand
2. Number of installed Software Applications
3. Efficiency, quality and operational characteristics of all installed software
4. Virtual Machine density
5. Amount of node memory consumed at any given time
6. Amount of node CPU consumed at any given time
7. Amount of node-available network bandwidth consumed at any given time
8. Amount of data transferred in the storage environment at any given time
9. Number of VM node-to-node migrations at any given time
10. Policies and processes of the organization that affect the computing environment – backup schedule, VM snapshots, antivirus scans, patching schedule, etc.
11. Random Operating System behavior – indexing, disk maintenance, etc.

All of these variables directly affect the amount of compute resources that are available to the primary business application on a given cloud compute node.

Further, the infrastructure of a Public Cloud implementation could be sufficiently dynamic and unpredictable as to force the creation of new computing instances simply because existing instances are resource constrained, precisely because of MUPCC. Organizations using Public Clouds need to investigate this concept to their satisfaction to ensure they are getting the resources they think they are and are being billed according to the resources their applications actually use.

Conclusion

MUPCC should be considered when determining the ROI of a Cloud Computing infrastructure investment.

The term "Morse Uncertainty Principle of Cloud Computing" is copyright ©2011 by Paul Morse, Redmond, WA.

© Copyright 2014 Paul Morse