Nebulas
Researchers from the University of Minnesota have outlined a way to use "distributed voluntary resources – those donated by end-user hosts – to form nebulas" that could complement today's managed clouds from companies such as Amazon, IBM and Google. Nebulas could serve classes of service that more traditional clouds cannot, offering greater scalability, wider geographic dispersion of nodes and lower cost.
CloudViews
CloudViews is a common storage system, built on Hadoop's HBase, that researchers are developing "to facilitate collaboration through protected inter-service data sharing". The researchers argue that public cloud providers must facilitate such collaboration, in the form of data-driven, server-side mashups, to ensure the market's growth through the development of new Web services.
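The core idea of protected inter-service sharing can be illustrated with a minimal sketch: a service exposes a named view of its data and explicitly grants other services read access, rather than copying data between silos. This is a conceptual illustration only; the class and service names below are hypothetical and are not part of the CloudViews API.

```python
class SharedView:
    """A named slice of one service's data with an access-control list,
    sketching protected inter-service data sharing (hypothetical, not
    the actual CloudViews interface)."""

    def __init__(self, owner, rows):
        self.owner = owner          # the service that owns the data
        self.rows = rows            # the shared records themselves
        self.readers = {owner}      # services allowed to read the view

    def grant(self, service):
        """Owner explicitly shares the view with another service."""
        self.readers.add(service)

    def read(self, service):
        """Reads are checked against the ACL, so sharing stays 'protected'."""
        if service not in self.readers:
            raise PermissionError(f"{service} may not read this view")
        return self.rows


# A photo service shares its tag data with a search service,
# enabling a server-side mashup without handing over the raw store.
view = SharedView("photo-service", [{"photo": 1, "tag": "sunset"}])
view.grant("search-service")
print(view.read("search-service"))
```

A real deployment would enforce the access check inside the shared storage layer rather than in application code, which is what motivates building the system on a common store such as HBase.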
Trusted Cloud Computing Platform
Researchers at the Max Planck Institute for Software Systems have outlined a Trusted Cloud Computing Platform that “enables Infrastructure-as-a-Service (IaaS) providers such as Amazon EC2 to provide a closed box execution environment that guarantees confidential execution of guest virtual machines”.
Private Virtual Infrastructure (PVI) and Locator Bot
University of Maryland, Baltimore County researcher John Krautheim proposes sharing risk responsibility more evenly between the cloud provider and the customer, giving the customer much more control than is typically the case. Components of this approach include a method for shutting down VMs if necessary and monitoring/auditing from both inside and outside the PVI.
Trading storage for computation
Researchers at the University of California, Santa Cruz, NetApp and Pergamum Systems are examining the trade-offs between storing data and simply recalculating results as needed. Determining the best way to store and retrieve data requires a cost-benefit analysis informed by both the cloud operator and the data user, because "neither has a completely informed view". The nature of cloud computing could lend itself to storing only information about the whereabouts and origins of data, then recomputing results on demand.
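The trade-off reduces to comparing two costs: keeping a derived result in storage for its useful lifetime versus recomputing it on each access. A minimal sketch of that comparison follows; the prices and parameters are hypothetical placeholders, not figures from the research.

```python
def store_cost(size_gb, months, price_per_gb_month=0.10):
    """Cost of keeping a derived result in cloud storage for its lifetime.
    The per-GB-month price is an assumed placeholder."""
    return size_gb * months * price_per_gb_month


def recompute_cost(cpu_hours, accesses, price_per_cpu_hour=0.085):
    """Cost of discarding the result and recomputing it on every access.
    The per-CPU-hour price is an assumed placeholder."""
    return cpu_hours * accesses * price_per_cpu_hour


def should_store(size_gb, months, cpu_hours, accesses):
    """Store the result only when storage is cheaper than repeated
    recomputation over the same period."""
    return store_cost(size_gb, months) < recompute_cost(cpu_hours, accesses)


# Example: a 50 GB derived dataset kept for 12 months, versus
# recomputing it (2 CPU-hours) on each of 100 expected accesses.
print(should_store(50, 12, 2, 100))  # False: recomputing is cheaper here
```

The paper's point is that neither party can run this calculation alone: the operator knows the prices and resource costs, while the user knows the access pattern and the data's lifetime, so the decision requires input from both.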