Reaping Cloud By Ensuring Data Storage Security

The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring the security of cloud computing is a major factor in the cloud computing environment, as users often store sensitive information with cloud storage providers, but these providers may be untrusted. Dealing with 'single cloud' providers is predicted to become less popular with customers due to risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards 'multi-clouds', or in other words 'interclouds' or 'cloud-of-clouds', has emerged recently. This paper surveys recent research related to single and multi-cloud security and addresses possible solutions. It is found that the use of multi-cloud providers to maintain security has received less attention from the research community than the use of single clouds. This work aims to promote the use of multi-clouds due to their ability to reduce security risks that affect the cloud computing user.

The Greek myths tell of creatures plucked from the surface of the Earth and enshrined as constellations in the night sky. Something similar is happening today in the world of computing. Data and programs are being swept up from desktop PCs and corporate server rooms and installed in 'the compute cloud'. In general, there is a shift in the geography of computation.
What exactly is cloud computing? As a starting point, here is one definition:
'An emerging computer paradigm where data and services reside in massively scalable data centres in the cloud and can be accessed from any connected device over the internet.'
Like other definitions of this kind, an understanding of the term cloud computing requires an understanding of various other closely related terms. While there is a lack of precise scientific definitions for many of these terms, general definitions can be given.

Cloud computing is an emerging paradigm in the computer industry where the computing is moved to a cloud of computers. It has become one of the buzzwords of the industry. The core concept of cloud computing is, quite simply, that the vast computing resources that we need will reside somewhere out there in the cloud of computers and we'll connect to them and use them as and when needed.
Computing can be described as any activity of using and/or developing computer hardware and software. It includes everything that sits in the bottom layer, i.e. everything from raw compute power to storage capabilities. Cloud computing ties all these entities together and delivers them as a single integrated whole under its own sophisticated management. Cloud is a term used as a metaphor for wide area networks (such as the Internet) or any similarly large networked environment. It came partly from the cloud-like symbol used to represent the complexities of networks in schematic diagrams. It represents all the complexities of the network, which may include everything from cables, routers, servers and data centres to all other such devices.
Computing started off with the mainframe era. There were big mainframes and everyone connected to them via 'dumb' terminals. This old model of business computing was frustrating for the people sitting at the dumb terminals because they could do only what they were 'authorized' to do. They were dependent on the computer administrators to give them permission or to fix their problems. They had no way of keeping up with the latest innovations.
The personal computer was a rebellion against the tyranny of centralized computing operations. There was a kind of freedom in the use of personal computers. But this was later replaced by server architectures, with enterprise servers and others showing up in the industry. This ensured that the computing got done without eating up the resources one had at hand: all the computing was performed at servers. The Internet grew in the lap of these servers. With cloud computing we have come full circle. We are back to a centralized computing infrastructure, but this time it is something that can easily be accessed via the Internet and over which we have full control.

1) Byzantine disk paxos: optimal resilience with Byzantine shared memory
AUTHORS: I. Abraham, G. Chockler, I. Keidar and D. Malkhi
We present Byzantine Disk Paxos, an asynchronous shared-memory consensus algorithm that uses a collection of n > 3t disks, t of which may fail by becoming non-responsive or arbitrarily corrupted. We give two constructions of this algorithm; that is, we construct two different t-tolerant (i.e., tolerating up to t disk failures) building blocks, each of which can be used, along with a leader oracle, to solve consensus. One building block is a t-tolerant wait-free shared safe register. The second building block is a t-tolerant regular register that satisfies a weaker termination (liveness) condition than wait freedom: its write operations are wait-free, whereas its read operations are guaranteed to return only in executions with a finite number of writes. We call this termination condition finite writes (FW), and show that wait-free consensus is solvable with FW-terminating registers and a leader oracle. We construct each of these t-tolerant registers from n > 3t base registers, t of which can be non-responsive or Byzantine. All the previous t-tolerant wait-free constructions in this model used at least 4t + 1 fault-prone registers, and we are not familiar with any prior FW-terminating constructions in this model. We further show tight lower bounds on the number of invocation rounds required for optimal resilience.
2) RACS: a case for cloud storage diversity
AUTHORS: H. Abu-Libdeh, L. Princehouse and H. Weatherspoon
The increasing popularity of cloud storage is leading organizations to consider moving data out of their own data centers and into the cloud. However, success for cloud storage providers can present a significant risk to customers; namely, it becomes very expensive to switch storage providers. In this paper, the authors make a case for applying RAID-like techniques used by disks and file systems, but at the cloud storage level. They argue that striping user data across multiple providers can allow customers to avoid vendor lock-in, reduce the cost of switching providers, and better tolerate provider outages or failures. They introduce RACS, a proxy that transparently spreads the storage load over many providers.
3) Database Management as a Service: Challenges and Opportunities
AUTHORS: D. Agrawal, A. El Abbadi, F. Emekci and A. Metwally
Data outsourcing or database as a service is a new paradigm for data management in which a third party service provider hosts a database as a service. The service provides data management for its customers and thus obviates the need for the service user to purchase expensive hardware and software, deal with software upgrades and hire professionals for administrative and maintenance tasks. Since using an external database service promises reliable data storage at a low cost it is very attractive for companies. Such a service would also provide universal access, through the Internet, to private data stored at reliable and secure sites. A client would store their data, and not need to carry their data with them as they travel. They would also not need to log in remotely to their home machines, which may suffer from crashes and be unavailable. However, recent governmental legislation, competition among companies, and database thefts mandate companies to use secure and privacy preserving data management techniques. The data provider, therefore, needs to guarantee that the data is secure, be able to execute queries on the data, and the results of the queries must also be secure and not visible to the data provider. Current research has focused only on how to index and query encrypted data. However, querying encrypted data is computationally very expensive. Providing an efficient trust mechanism to push both database service providers and clients to behave honestly has emerged as one of the most important problems to solve before data outsourcing can become a viable paradigm. In this paper, we describe scalable privacy preserving algorithms for data outsourcing. Instead of encryption, which is computationally expensive, we use distribution over multiple data provider sites and information-theoretically proven secret sharing algorithms as the basis for privacy preserving outsourcing.
The technical contribution of this paper is the establishment and development of a framework for efficient, fault-tolerant, scalable and theoretically secure privacy-preserving data outsourcing that supports a diversity of database operations executed on different types of data, and which can even leverage publicly available data sets.
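The secret-sharing idea the authors build on can be illustrated with Shamir's classic (k, n) threshold scheme. The sketch below is a minimal version over a small prime field, offered only to make the mechanism concrete; it is not the authors' algorithm, and the parameters are illustrative:

```python
import random

PRIME = 2**31 - 1   # Mersenne prime field size; must exceed any secret stored

def make_shares(secret: int, k: int, n: int) -> list:
    """Split `secret` into n shares; any k of them reconstruct it (Shamir)."""
    # A random degree-(k-1) polynomial whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME)
                         for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME          # (0 - xj) factors
                den = den * (xi - xj) % PRIME    # (xi - xj) factors
        # Modular inverse of den via Fermat's little theorem.
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = make_shares(4242, k=3, n=5)   # one share per provider
secret = reconstruct(shares[:3])       # any 3 providers suffice -> 4242
```

Each provider stores one share; any k of them can reconstruct the value, while fewer than k reveal nothing about it. That information-theoretic guarantee is what lets such schemes replace computationally expensive encryption in the outsourcing setting the paper targets.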
4) Using Multi Shares for Ensuring Privacy in Database-as-a-Service
AUTHORS: M.A. AlZain and E. Pardede
Database-as-a-service (DAAS) is a new model for data management, where a service provider offers customers software management functionalities as well as the use of expensive hardware. This service enables data integration and access on a large scale in cloud computing infrastructures. Addressing data privacy in DAAS is considered a significant issue for any organizational database. Because the data will be shared with a third party, an untrusted server is dangerous and unsafe for the user. This paper proposes the architecture of a new model appropriate for the NetDB2 architecture, known as NetDB2 Multi-Shares (NetDB2-MS). It is based on multiple service providers and a secret sharing algorithm instead of the encryption used by the existing NetDB2 service. The evaluation is done through simulations. It shows a significant improvement in performance for data storage and retrieval for various query types.
5) DepSky: dependable and secure storage in a cloud-of-clouds
AUTHORS: A. Bessani, M. Correia, B. Quaresma, F. André and P. Sousa
The increasing popularity of cloud storage services has led companies that handle critical data to think about using these services for their storage needs. Medical record databases, power system historical information and financial data are some examples of critical data that could be moved to the cloud. However, the reliability and security of data stored in the cloud still remain major concerns. In this paper we present DepSky, a system that improves the availability, integrity and confidentiality of information stored in the cloud through the encryption, encoding and replication of the data on diverse clouds that form a cloud-of-clouds. We deployed our system using four commercial clouds and used PlanetLab to run clients accessing the service from different countries. We observed that our protocols improved the perceived availability and, in most cases, the access latency when compared with cloud providers individually. Moreover, the monetary cost of using DepSky in this scenario is twice the cost of using a single cloud, which is optimal and seems to be a reasonable cost, given the benefits.
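The replicate-and-verify pattern behind a cloud-of-clouds read can be sketched as follows. This is a toy illustration of majority reads over digest-checked replicas, with in-memory dicts standing in for the commercial clouds; the real DepSky protocol additionally uses encryption and erasure coding, which are omitted here:

```python
import hashlib

class CloudOfClouds:
    """Toy majority-read replication in the spirit of DepSky: write every
    object to all clouds, and on read accept only a copy whose digest both
    verifies and is reported by more than f clouds. With n >= 3f + 1 clouds,
    up to f faulty or malicious providers are tolerated."""

    def __init__(self, n_clouds: int):
        self.clouds = [{} for _ in range(n_clouds)]
        self.f = (n_clouds - 1) // 3

    def write(self, key: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        for cloud in self.clouds:
            cloud[key] = (data, digest)      # replicate to every cloud

    def read(self, key: str) -> bytes:
        copies = [c[key] for c in self.clouds if key in c]
        digests = [d for _, d in copies]
        for data, digest in copies:
            if (hashlib.sha256(data).hexdigest() == digest
                    and digests.count(digest) > self.f):
                return data                  # verified by a quorum
        raise RuntimeError("no correct copy found")

cc = CloudOfClouds(4)                        # tolerates f = 1 bad cloud
cc.write("record", b"patient data")
cc.clouds[0]["record"] = (b"tampered", "forged")   # malicious provider
restored = cc.read("record")                 # still b"patient data"
```

A single tampering provider cannot forge a copy that both hashes correctly and is echoed by a quorum, which is the intuition behind reading from a cloud-of-clouds rather than trusting any one provider.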

Cloud computing is Internet ("cloud") based development and use of computer technology ("computing"). It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them [11].
The concept incorporates infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS), as well as Web 2.0 and other recent technology trends which share the common theme of reliance on the Internet for satisfying the computing needs of the users. Examples of SaaS vendors include Salesforce.com and Google Apps, which provide common business applications online that are accessed from a web browser, while the software and data are stored on the providers' servers.
The term cloud is used as a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals.

Fig. 1: Cloud Computing Overview
Cloud computing is often confused with grid computing ("a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks"), utility computing (the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity") and autonomic computing ("computer systems capable of self-management").
Indeed, many cloud computing deployments as of 2009 depend on grids, have autonomic characteristics and bill like utilities, but cloud computing can be seen as a natural next step from the grid-utility model. Some successful cloud architectures have little or no centralized infrastructure or billing systems whatsoever, including peer-to-peer networks like BitTorrent and Skype and volunteer computing projects.
The majority of cloud computing infrastructure as of 2009 consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible anywhere that has access to networking infrastructure. The Cloud appears as a single point of access for all the computing needs of consumers. Commercial offerings need to meet the quality of service requirements of customers and typically offer service level agreements. Open standards are critical to the growth of cloud computing and open source software has provided the foundation for many cloud computing implementations.
As customers generally do not own the infrastructure, they merely access or rent, they can avoid capital expenditure and consume resources as a service, paying instead for what they use. Many cloud-computing offerings have adopted the utility computing model, which is analogous to how traditional utilities like electricity are consumed, while others are billed on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not left idle, which can reduce costs significantly while increasing the speed of application development. A side effect of this approach is that "computer capacity rises dramatically" as customers do not have to engineer for peak loads. Adoption has been enabled by "increased high-speed bandwidth" which makes it possible to receive the same response times from centralized infrastructure at other sites.
Cloud computing users can avoid capital expenditure (CapEx) on hardware, software and services, rather paying a provider only for what they use. Consumption is billed on a utility (e.g. resources consumed, like electricity) or subscription (e.g. time based, like a newspaper) basis with little or no upfront cost. Other benefits of this time sharing style approach are low barriers to entry, shared infrastructure and costs, low management overhead and immediate access to a broad range of applications. Users can generally terminate the contract at any time (thereby avoiding return on investment risk and uncertainty) and the services are often covered by service level agreements with financial penalties.
According to Nicholas Carr the strategic importance of information technology is diminishing as it becomes standardized and cheaper. He argues that the cloud computing paradigm shift is similar to the displacement of electricity generators by electricity grids early in the 20th century.

Fig. 2: Cloud Computing Economics

Providers including Amazon, Microsoft, Google, Sun and Yahoo exemplify the use of cloud computing. It is being adopted by users ranging from individuals to large enterprises including General Electric, L'Oréal, and Procter & Gamble.

The Cloud is a term with a long history in telephony which has, in the past decade, been adopted as a metaphor for Internet-based services, with a common depiction in network diagrams as a cloud outline.
The underlying concept dates back to 1960 when John McCarthy opined that "computation may someday be organized as a public utility"; indeed it shares characteristics with service bureaus which date back to the 1960s. The term cloud had already come into commercial use in the early 1990s to refer to large ATM networks. By the turn of the 21st century, the term "cloud computing" had started to appear, although most of the focus at this time was on Software as a service (SaaS).
In 1999, Salesforce.com was established by Marc Benioff, Parker Harris and their colleagues. They applied many technologies of consumer web sites like Google and Yahoo! to business applications. They also proved the concepts of "on demand" and "SaaS" with their real business and successful customers. The key for SaaS is that it is customizable by the customer alone or with a small amount of help. The flexibility and speed it offers for application development have been widely welcomed and accepted by business users.
IBM extended these concepts in 2001, as detailed in the Autonomic Computing Manifesto, which described advanced automation techniques such as self-monitoring, self-healing, self-configuring and self-optimizing in the management of complex IT systems with heterogeneous storage, servers, applications, networks, security mechanisms and other system elements that can be virtualized across an enterprise. Amazon played a key role in the development of cloud computing by modernizing their data centers after the dot-com bubble and, having found that the new cloud architecture resulted in significant internal efficiency improvements, providing access to their systems by way of Amazon Web Services in 2005 on a utility computing basis.
2007 saw increased activity, with Google, IBM, and a number of universities embarking on a large scale cloud computing research project, around the time the term started gaining popularity in the mainstream press. It was a hot topic by mid-2008 and numerous cloud computing events had been scheduled.
In August 2008, Gartner Research observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" and that the "projected shift to cloud computing will result in dramatic growth in IT products in some areas and in significant reductions in other areas."

What could we do with 1000 times more data and CPU power? One simple question: that is all it took the interviewers at Google to bewilder confident job applicants. The question is relevant because the amount of data an application handles is increasing day by day, and so is the CPU power one can harness.
There are many answers to this question. With this much CPU power, we could scale our businesses to 1000 times more users. Right now we gather statistics about every user of an application. With such CPU power at hand, we could monitor every single user click and every user interaction, gathering complete statistics about the user. We could improve recommendation systems and model better price plan choices. With this CPU power we could simulate the case where we have, say, 100,000 users in the system without any glitches.
There are lots of other things we could do with so much CPU power and data capability. But what is holding us back? One reason is that the large-scale architectures these require are difficult to manage. There may be many different problems with the architecture we have to support: machines may start failing, hard drives may crash, the network may go down, and many other such hardware problems may occur. The hardware has to be designed so that the architecture is reliable and scalable. Such large-scale architecture has a very expensive upfront cost and high maintenance costs. It requires resources like machines, power, cooling, etc. The system also cannot scale as and when needed, and so is not easily reconfigurable.
Applications are also constrained by the available resources. As applications become large, they become I/O bound, and hard drive access speed becomes a limiting factor. Though the raw CPU power available may not be a factor, the amount of RAM available clearly is, and it too is limited in this context. Even if the hardware problems are managed very well, software problems arise: there may be bugs in software handling this much data. The workload also demands two quite different competences: the software has to be bug-free, and it has to have good data-processing algorithms to manage all the data.
Cloud computing works on the cloud: large groups of often low-cost servers with specialized connections that spread the data-processing chores among them. Since many low-cost servers are connected together, large pools of resources are available, offering almost unlimited computing resources. This makes the availability of resources a lesser issue.
The application's data can also be stored in the cloud. Storing data in the cloud has many distinct advantages over other forms of storage. For one, data is spread through the cloud in such a way that there are multiple copies of it, and there are ways by which failure can be detected and the data rebalanced on the fly. I/O operations also become simpler: browsing and searching through 25 GB or more of data becomes straightforward in the cloud, which is nearly impossible to do on a desktop.
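The detect-and-rebalance behaviour just described can be sketched with a toy replicated store. The node ids, the fixed replication factor and the in-memory dicts are illustrative assumptions, not any particular system's design:

```python
import random

class ReplicatedStore:
    """Toy sketch of failure detection and rebalancing: every block lives
    on `rf` nodes; when a node fails, its blocks are re-copied from a
    surviving replica so the replication factor is restored on the fly."""

    def __init__(self, n_nodes: int, rf: int = 3):
        self.nodes = {i: {} for i in range(n_nodes)}
        self.rf = rf

    def put(self, key: str, data: bytes) -> None:
        for node in random.sample(list(self.nodes), self.rf):
            self.nodes[node][key] = data     # rf copies, spread around

    def fail(self, node_id: int) -> None:
        lost_keys = list(self.nodes.pop(node_id))
        for key in lost_keys:                # rebalance each lost block
            holders = [n for n, b in self.nodes.items() if key in b]
            if not holders:
                continue                     # every replica gone: data lost
            data = self.nodes[holders[0]][key]   # copy from a survivor
            spares = [n for n in self.nodes if key not in self.nodes[n]]
            for n in spares[: self.rf - len(holders)]:
                self.nodes[n][key] = data

    def get(self, key: str) -> bytes:
        for blocks in self.nodes.values():   # any surviving copy will do
            if key in blocks:
                return blocks[key]
        raise KeyError(key)

store = ReplicatedStore(n_nodes=5, rf=3)
store.put("chunk-1", b"payload")
store.fail(0)                # a node dies; the store rebalances itself
data = store.get("chunk-1")  # b"payload", and 3 copies exist again
```

Production systems layer heartbeats, checksums and placement policies on top of this idea, but the core loop of noticing a lost replica and re-copying from survivors is the same.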
Cloud computing applications also provide automatic reconfiguration of resources based on service level agreements. When applications run outside the cloud, scaling them with the load is a tedious task, because the resources have to be procured and then provided to the users. If load spikes are short-lived compared with normal operation but occur frequently, scaling the resources becomes especially tedious. But when the application is in the cloud, the load can be managed by spreading it to other available nodes by copying the application onto them, and this can be reverted once the load goes down. It can be done as and when needed, and it all happens automatically, so that the resources maintain and manage themselves.
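The automatic scale-out/scale-in loop described above typically follows a simple proportional rule. The sketch below assumes a hypothetical load metric normalised per replica; it mirrors, for example, the rule used by Kubernetes' Horizontal Pod Autoscaler, but it is an illustration, not that implementation:

```python
import math

def desired_replicas(current: int, load_per_replica: float,
                     target: float = 0.6, lo: int = 1, hi: int = 20) -> int:
    """Proportional autoscaling: pick the replica count that brings the
    per-replica load back toward the `target` utilisation."""
    # The tiny epsilon guards against float rounding pushing ceil() up
    # a whole replica when the ratio is exactly integral.
    wanted = math.ceil(current * load_per_replica / target - 1e-9)
    return max(lo, min(hi, wanted))          # clamp to the allowed range

desired_replicas(2, 0.90)   # overloaded: grow to 3 replicas
desired_replicas(3, 0.20)   # mostly idle: shrink back to 1 replica
```

Run periodically against a measured load metric, this one rule yields exactly the behaviour the paragraph describes: copies of the application appear under load and disappear when the load goes down.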

Cost: is greatly reduced and capital expenditure is converted to operational expenditure. This lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and minimal or no IT skills are required for implementation.
Device and location independence: enables users to access systems using a web browser regardless of their location or what device they are using, e.g., PC, mobile. As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
Multi-tenancy: enables sharing of resources and costs among a large pool of users, allowing for:
• Centralization of infrastructure in areas with lower costs (such as real estate, electricity, etc.)
• Peak-load capacity increases (users need not engineer for highest possible load-levels)
• Utilization and efficiency improvements for systems that are often only 10-20% utilized.
Reliability: improves through the use of multiple redundant sites, which makes it suitable for business continuity and disaster recovery. Nonetheless, most major cloud computing services have suffered outages and IT and business managers are able to do little when they are affected.
Scalability: via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface.
Security: typically improves due to centralization of data, increased security-focused resources, etc., but raises concerns about loss of control over certain sensitive data. Security is often as good as or better than traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible.
Sustainability: comes about through improved resource utilization, more efficient systems, and carbon neutrality. Nonetheless, computers and associated infrastructure are major consumers of energy.
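The utilization gain from multi-tenancy listed above is easy to quantify. A small worked example, taking the 10-20% figure from the list and assuming 75% as a safe operating point for shared hosts (an illustrative assumption):

```python
import math

def hosts_needed(servers: int, utilization: float,
                 target: float = 0.75) -> int:
    """Number of shared hosts that can carry the same aggregate load
    as `servers` dedicated hosts running at `utilization`."""
    return math.ceil(servers * utilization / target)

hosts_needed(10, 0.15)   # ten dedicated hosts at 15% fit on 2 shared hosts
```

The same aggregate work on a fifth of the machines is where the cost, peak-capacity and sustainability benefits in the list come from.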

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, comprises hardware and software designed by a cloud architect who typically works for a cloud integrator [12]. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services.
This closely resembles the UNIX philosophy of having multiple programs doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.
Cloud architecture extends to the client, where web browsers and/or software applications access cloud applications.
Cloud storage architecture is loosely coupled, with centralized metadata operations enabling the data nodes to scale into the hundreds, each independently delivering data to applications or users [1].

Fig. 3: Cloud Architecture

There are five layers in the cloud computing model: the Client Layer, Application Layer, Platform Layer, Infrastructure Layer and Server Layer [17]. In order to address the security problem, every layer should have its own security implementation.
Client Layer:
In the cloud computing model, the cloud client consists of the computer hardware and software that rely entirely on cloud services for application delivery and are essentially designed for it. Clients span a variety of devices, including computers, phones, operating systems, browsers and other devices.
Application Layer:
Cloud application services deliver software as a service over the Internet, eliminating the need to install and run the application on the customer's own computers and simplifying maintenance and support. Activities are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the web. Application software is delivered in a one-to-many model, with pricing, partnership and management characteristics that allow features to be updated centrally.
Platform Layer:
In cloud computing, platform services provide a common computing platform and solution stack, often referred to as the cloud infrastructure, on which cloud applications can be deployed and maintained without the cost and complexity of buying and managing the underlying hardware and software layers.
Infrastructure Layer:
Cloud infrastructure services deliver platform virtualization, exposing only the desired features and hiding the rest. Servers, software and network equipment are fully outsourced as utility computing, based on efficient use of resources and the principle of reusability; offerings include virtual private servers running in tier-3 data centres with many tier-4 attributes, assembled from hundreds of virtual machines.
Server Layer:
The server layer consists of the computation hardware and software that support the cloud service, including multi-core processors, cloud-specific operating systems and combined offerings.

Fig. 4: Cloud Types
Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis [1].
Private cloud
Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These products claim to "deliver some benefits of cloud computing without the pitfalls", capitalizing on data security, corporate governance, and reliability concerns.
While an analyst predicted in 2008 that private cloud networks would be the future of corporate IT, there is some uncertainty whether they are a reality even within the same firm. Analysts also claim that within five years a "huge percentage" of small and medium enterprises will get most of their computing resources from external cloud computing providers as they "will not have economies of scale to make it worth staying in the IT business" or be able to afford private clouds.
The term has also been used in a logical rather than physical sense, for example in reference to platform-as-a-service offerings, though such offerings, including Microsoft's Azure Services Platform, are not available for on-premises deployment.
Hybrid cloud
A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises".
• Lower computer costs:
• You do not need a high-powered and high-priced computer to run cloud computing's web-based applications.
• Since applications run in the cloud, not on the desktop PC, your desktop PC does not need the processing power or hard disk space demanded by traditional desktop software.
• When you are using web-based applications, your PC can be less expensive, with a smaller hard disk, less memory and a more efficient processor.
• In fact, your PC in this scenario does not even need a CD or DVD drive, as no software programs have to be loaded and no document files need to be saved.
• Improved performance:
• With fewer large programs hogging your computer's memory, you will see better performance from your PC.
• Computers in a cloud computing system boot and run faster because they have fewer programs and processes loaded into memory.
• Reduced software costs:
• Instead of purchasing expensive software applications, you can get most of what you need for free.
• Most cloud computing applications today, such as the Google Docs suite, cost nothing, which is better than paying for similar commercial software and may alone be justification for switching to cloud applications.
• Instant software updates:
• Another advantage of cloud computing is that you are no longer faced with choosing between obsolete software and high upgrade costs.
• When the application is web-based, updates happen automatically and are available the next time you log into the cloud.
• When you access a web-based application, you get the latest version, without needing to pay for or download an upgrade.
• Improved document format compatibility:
• You do not have to worry about the documents you create on your machine being compatible with other users' applications or OS's.
• There are potentially no format incompatibilities when everyone is sharing documents and applications in the cloud.
• Unlimited storage capacity:
• Cloud computing offers virtually limitless storage.
• Your computer's current 1 TB hard drive is small compared to the hundreds of petabytes available in the cloud.
' Increased data reliability:
' Unlike desktop computing, where a hard disk crash can destroy all your valuable data, a computer crashing in the cloud should not affect the storage of your data.
' If your personal computer crashes, all your data is still out there in the cloud, still accessible.
' In a world where few individual desktop PC users back up their data on a regular basis, cloud computing is a data-safe computing platform.
' Universal document access:
' Leaving a document behind is not a problem with cloud computing, because you do not take your documents with you.
' Instead, they stay in the cloud, and you can access them whenever you have a computer and an Internet connection
' Documents are instantly available from wherever you are
' Latest version availability:
' When you edit a document at home, that edited version is what you see when you access the document at work.
' The cloud always hosts the latest version of your documents
' Easier group collaboration:
' Sharing documents leads directly to better collaboration.
' Because multiple users can collaborate easily on documents and projects, many users consider this one of the most important advantages of cloud computing.
' Device independence:
' You are no longer tethered to a single computer or network.
' Changes to computers, applications and documents follow you through the cloud.
' Move to a portable device, and your applications and documents are still available.

' Requires a constant Internet connection:
' Cloud computing is impossible if you cannot connect to the Internet.
' Since you use the Internet to connect to both your applications and documents, if you do not have an Internet connection you cannot access anything, even your own documents.
' A dead Internet connection means no work, and in areas where Internet connections are few or inherently unreliable, this could be a deal-breaker.
' Does not work well with low-speed connections:
' Similarly, a low-speed Internet connection, such as that found with dial-up services, makes cloud computing painful at best and often impossible.
' Web-based applications require a lot of bandwidth to download, as do large documents.
' Features might be limited:
' This situation is bound to change, but today many web-based applications simply are not as full-featured as their desktop-based counterparts.
' Can be slow:
' Even with a fast connection, web-based applications can sometimes be slower than accessing a similar software program on your desktop PC.
' Everything about the program, from the interface to the current document, has to be sent back and forth from your computer to the computers in the cloud.
' If the cloud servers happen to be backed up at that moment, or if the Internet is having a slow day, you would not get the instantaneous access you might expect from desktop applications.
' Stored data might not be secure:
' With cloud computing, all your data is stored on the cloud.
' The question is: how secure is the cloud?
' Can unauthorized users gain access to your confidential data?
' Stored data can be lost:
' Theoretically, data stored in the cloud is safe, replicated across multiple machines.
' But on the off chance that your data goes missing, you have no physical or local backup.

As more companies move to cloud computing, look for hackers to follow. Some of the potential attack vectors criminals may attempt include [17]:
' Denial of Service (DoS) attacks :
Some security professionals have argued that the cloud is more vulnerable to DoS attack, because it is shared by many users, which makes DoS attack much more damaging.

' Side Channel attacks :
An attacker could attempt to compromise the cloud by placing a malicious virtual machine in close proximity to a target cloud server and then launching a side channel attack.

' Authentication attack :
Authentication is a weak point in hosted and virtual services and is frequently targeted. There are many different ways to authenticate users; for example, based on what a person knows, has or is. The mechanism used to secure the authentication process and the methods used are a frequent target of attackers.

' Man in the middle cryptographic attacks :
This attack is carried out when an attacker places himself between two users. Anytime attackers can place themselves in the communication's path, there is the possibility that they can intercept and modify communications.

' Inside job:
This kind of attack occurs when a person, such as an employee or staff member who knows how the system runs, from client to server, implants malicious code to destroy everything in the cloud system.

Security measures assumed in the cloud must be made available to the customers to gain their trust. There is always a possibility that the cloud infrastructure is secured with respect to some requirements while the customers are looking for a different set of security guarantees. The important aspect is to ensure that the cloud provider meets the security requirements of the application, and this can be achieved only through 100% transparency. The Open Cloud Manifesto stresses transparency in clouds, owing to consumers' apprehension about hosting their applications on a shared infrastructure over which they have no control. In order to have a secure cloud computing deployment, we must consider the following areas: the cloud computing architecture; governance; portability and interoperability; traditional security; business continuity and disaster recovery; data centre operations; incident response, notification and remediation; application security; encryption and key management; and identity and access management.
One of the reasons why users are anxious about the safety of data saved in the cloud is that they do not know who is managing it while it sits on the servers of the cloud computing service provider. Typical users of a cloud computing service, who store their files on the server so they can access them anywhere through the Internet, do not worry much about the security of those files, since they are common documents that do not need to be secured. But large companies, which have very important information to take care of, need a secure cloud computing system [5].
In order to have a secure cloud system, the following aspects must be considered:
Authentication:
Authentication is the process of verifying a user or other entity's identity, typically done to permit someone or something to perform a task. There is a variety of authentication systems; some are stronger than others. A strong authentication system ensures that the authenticators and messages of the actual authentication protocol are not exchanged in a manner that makes them vulnerable to being hijacked by an intermediate malicious node or person. That is, the information used to generate a proof of identity should not be exposed to anyone other than the person or machine it is intended for.
Authorization:
Authorization is when the system decides whether or not a certain entity is allowed to perform a requested task. This decision is made after authenticating the identity in question. When considering an authentication system for a particular application, it is crucial to understand the type of identifier required to provide a certain level of authorization.
Confidentiality:
Confidentiality is needed when the message sent contains sensitive material that should not be read by others and therefore must not be sent in a comprehensible format. A loss of confidentiality is the unauthorized disclosure of information. Confidentiality, as it relates to security and encryption techniques, can be obtained by encrypting messages such that only the intended recipients are able to read them.
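As a small illustration of this idea, the sketch below encrypts a message with a one-time pad (XOR against a random, single-use key of the same length); the message is invented for the example, and this is a teaching toy rather than a production cipher:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

# The key is random, as long as the message, and used only once,
# so only someone holding the key can recover the plaintext.
plaintext = b"credit card 4111-1111-1111-1111"
key = secrets.token_bytes(len(plaintext))

ciphertext = xor_bytes(plaintext, key)   # what an eavesdropper sees
recovered = xor_bytes(ciphertext, key)   # what the key holder recovers
assert recovered == plaintext
```

Without the key, the ciphertext reveals nothing about the message, which is exactly the confidentiality property described above.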
Integrity:
Integrity is ensuring that the data presented comes from the true and valid master source, and includes guarding against improper information modification or destruction, to ensure information non-repudiation and authenticity. A loss of integrity is the unauthorized modification, insertion, or destruction of information.
One way of ensuring data integrity is by using simple checksums, which prevent an attacker from forging or replaying messages. A checksum is usually employed when the channel between the communicating parties is not secure; it ensures that the data has reached its destination with all bits intact and that, if bits have been modified, the modification will not go unobserved.
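The checksum idea can be sketched in a few lines; the messages and the choice of SHA-256 are assumptions for the example:

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of the data, used here as a checksum."""
    return hashlib.sha256(data).hexdigest()

# The sender computes the checksum before the message leaves.
message = b"transfer 100 to account 42"
sent_digest = checksum(message)

# The receiver recomputes it on arrival; any modified bit changes the digest.
assert checksum(b"transfer 100 to account 42") == sent_digest
assert checksum(b"transfer 900 to account 42") != sent_digest
```

In practice the digest itself must travel over a trusted channel (or be signed), otherwise an attacker could replace both the message and its checksum.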
Non-repudiation:
Non-repudiation is ensuring that a traceable legal record is kept and has not been changed by a malicious entity. A loss of non-repudiation would result in the questioning of the transactions that have occurred. A simple example of non-repudiation is signing a contract: the signer cannot claim they did not agree to the contract, because there is evidence that they did agree. The difference is that a signature can be forged, but good encryption cannot.

In different cloud service models, the security responsibility is divided differently between users and providers. According to Amazon, its EC2 addresses security control in relation to physical, environmental and virtualization security, whereas the users remain responsible for addressing the security controls of the IT system, including the operating systems, applications and data [15].
Data Integrity:
It is not an easy task to securely maintain all the essential data that client applications need in cloud computing. Data kept in cloud storage may not be fully trustworthy because the client does not keep a copy of all stored data. The proposed system therefore uses a data reading protocol algorithm to check the integrity of data before and after its insertion into the cloud [2]. Here the security of the data before and after the operation is checked by the client with the help of the CSP, using the "effective automatic data reading protocol from user as well as cloud level into the cloud" with truthfulness.
Data Intrusion:
Data intrusion detection systems are important in a cloud computing environment. We examine how intrusion detection is performed on Software as a Service, Platform as a Service and Infrastructure as a Service offerings, along with the available host, network and hypervisor based intrusion detection options. Attacks on systems and data are a reality in the world we live in [2]. Detecting and responding to those attacks has become the norm and is considered due diligence when it comes to security.
Service Availability:
Service availability is one of the most significant issues in cloud computing security. Amazon mentions in its licensing agreement that the service might be unavailable from time to time. The user's web service may terminate for any reason at any time if any of the user's files break the cloud storage policy. In addition, if any damage occurs to any Amazon web service and the service fails, there will be no charge to the Amazon company for this failure. Companies seeking to protect services from such failure need measures such as backups or use of multiple providers.


DepSky System: Multi-Clouds Model

The term 'multi-clouds' is similar to the terms 'interclouds' or 'cloud-of-clouds' [12]. These terms suggest that cloud computing should not end with a single cloud. In this design, a cloudy sky incorporates different colors and shapes of clouds, which lead to different implementations and administrative domains. In the proposed system, Bessani et al. present a virtual storage cloud system called DepSky, which consists of a combination of different clouds to build a cloud-of-clouds. The DepSky system addresses the availability and the confidentiality of data in its storage system by using multi-cloud providers, combining Byzantine quorum system protocols, cryptographic secret sharing and erasure codes.
DepSky Architecture and Data Model
The DepSky architecture consists of four clouds, and every cloud uses its own particular interface. The DepSky algorithm exists on the clients' machines as a software library that communicates with each cloud. These four clouds are storage clouds, so no code is executed on them. The DepSky library supports read and write operations with the storage clouds. The use of diverse clouds requires the DepSky library to deal with the heterogeneity of the interfaces of each cloud provider. An especially important aspect is the format of the data accepted by each cloud. The data model allows us to ignore these details when presenting the algorithms.
DepSky Data model:
As the DepSky system interacts with various cloud providers, the DepSky library has to deal with the interface of each provider and, as a result, with the data format accepted by each cloud. The DepSky data model consists of three abstraction levels: the conceptual data unit, a generic data unit, and the data unit implementation [15].

Fig. 5: DepSky Architecture
The DepSky data model has three abstraction levels. In the first (left), there is the conceptual data unit, which corresponds to the basic storage object with which the algorithms work (a register, in distributed computing terms). A data unit has a unique name, a version number, verification data and the data stored in the data unit object. In the second level (middle), the conceptual data unit is implemented as a generic data unit in an abstract storage cloud. Each generic data unit, or container, holds two types of files: a signed metadata file and the files that store the data. Metadata files contain the version number and the verification data, together with other information that applications may demand. Notice that a data unit can store several versions of the data, i.e., the container can hold several data files. The name of the metadata file is simply metadata, while the data files are called value<Version>, where <Version> is the version number of the data (e.g., value1, value2, etc.). Finally, in the third level (right) there is the data unit implementation, i.e., the container translated into the specific constructions supported by each cloud provider.
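The container layout just described, one signed metadata file plus value<Version> data files, can be sketched as a plain structure; the concrete values (version numbers, digests, payloads) are made up for the example:

```python
# Illustrative sketch of a DepSky-style data-unit container (assumed layout).
container = {
    "metadata": {                # the signed metadata file
        "version": 2,            # number of the latest data version
        "verification": "digest-of-value2",  # verification data (placeholder)
    },
    "value1": b"first version of the data",   # older data file
    "value2": b"second version of the data",  # current data file
}

def latest_value(c: dict) -> bytes:
    """Look up the data file named after the version in the metadata."""
    return c["value%d" % c["metadata"]["version"]]
```

Reading the metadata first and then fetching the matching value file mirrors how the container decouples versioning from the stored data itself.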


Cloud providers should address privacy and security issues as a matter of high and urgent priority. Dealing with 'single cloud' providers is becoming less popular with customers due to potential problems such as service availability failure and the possibility that there are malicious insiders in the single cloud. In recent years, there has been a move towards 'multi-clouds', 'inter-cloud' or 'cloud-of-clouds'.

Disadvantages of Existing System:
1. Cloud providers should address privacy and security issues as a matter of high and urgent priority.
2. Dealing with 'single cloud' providers is becoming less popular with customers due to potential problems such as service availability failure and the possibility that there are malicious insiders in the single cloud.

This paper focuses on the issues related to the data security aspect of cloud computing. As data and information will be shared with a third party, cloud computing users want to avoid an un-trusted cloud provider. Protecting private and important information, such as credit card details or a patient's medical records from attackers or malicious insiders is of critical importance. In addition, the potential for migration from a single cloud to a multi-cloud environment is examined and research related to security issues in single and multi-clouds in cloud computing is surveyed.

Advantages of Proposed System:
1. Data Integrity
2. Service Availability
3. The user runs custom applications using the service provider's resources
4. Cloud service providers should ensure the security of their customer's data and should be responsible if any security risk affects their customer's service infrastructure.


Module Description:

Data Integrity:
One of the most important issues related to cloud security risks is data integrity. The data stored in the cloud may suffer damage during transfer operations from or to the cloud storage provider. Cachin et al. give examples of the risk of attacks from both inside and outside the cloud provider, such as the recently attacked Red Hat Linux distribution servers [15].
One of the solutions that they propose is to use a Byzantine fault-tolerant replication protocol within the cloud. Hendricks et al. state that this solution can avoid data corruption caused by some components in the cloud. However, Cachin et al. claim that using the Byzantine fault-tolerant replication protocol within the cloud is unsuitable because the servers belonging to cloud providers use the same system installations and are physically located in the same place.

Data Intrusion:
According to Garfinkel, another security risk that may occur with a cloud provider, such as the Amazon cloud service, is a hacked password or data intrusion. If someone gains access to an Amazon account password, they will be able to access all of the account's instances and resources. Thus the stolen password allows the hacker to erase all the information inside any virtual machine instance for the stolen user account, modify it, or even disable its services. Furthermore, there is a possibility for the user's email (Amazon user name) to be hacked (see for a discussion of the potential risks of email), and since Amazon allows a lost password to be reset by email, the hacker may still be able to log in to the account after receiving the new reset password.

Service Availability:
Another major concern in cloud services is service availability. Amazon mentions in its licensing agreement that it is possible that the service might be unavailable from time to time. The user's web service may terminate for any reason at any time if any user's files break the cloud storage policy. In addition, if any damage occurs to any Amazon web service and the service fails, in this case there will be no charge to the Amazon Company for this failure. Companies seeking to protect services from such failure need measures such as backups or use of multiple providers.
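The "use of multiple providers" measure suggested above can be sketched as a simple failover read; the two provider functions below are hypothetical stand-ins, one simulating an outage:

```python
# Hypothetical sketch of protecting against provider failure by reading
# from the first available storage provider in a preference list.
def read_with_failover(key, providers):
    """Try each storage provider in order until one returns the object."""
    for read in providers:
        try:
            return read(key)
        except IOError:
            continue  # this provider is down; fall through to the next one
    raise IOError("all storage providers are unavailable")

def primary_cloud(key):
    raise IOError("service unavailable")  # simulated outage

def backup_cloud(key):
    return {"report.txt": b"quarterly figures"}[key]

data = read_with_failover("report.txt", [primary_cloud, backup_cloud])
assert data == b"quarterly figures"
```

A real multi-cloud client would also reconcile versions across providers, but even this simple loop removes the single point of failure the paragraph warns about.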

DepSky System Model:
The DepSky system model contains three parts: readers, writers, and four cloud storage providers, where readers and writers are the client's tasks. Bessani et al. explain the difference between readers and writers for cloud storage: readers can fail arbitrarily (for example, they can fail by crashing, or fail from time to time and then display any behavior), whereas writers only fail by crashing [15].
RSA Algorithm: RSA is a block cipher in which every message is mapped to an integer. RSA involves a Public Key and a Private Key. In our cloud environment, the Public Key is known to all, whereas the Private Key is known only to the user who originally owns the data. Thus, encryption is done by the cloud service provider and decryption is done by the cloud user or consumer. Once the data is encrypted with the Public Key, it can be decrypted only with the corresponding Private Key [16].
RSA Algorithm Involves Three Steps
' Key Generation
' Encryption
' Decryption
Key Generation
Before the data is encrypted, key generation must be done. This process is carried out between the cloud service provider and the user. The steps are:
' Choose two distinct prime numbers a and b. For security purposes, the integers a and b should be chosen at random and should be of similar bit length.
' Compute n = a * b.
' Compute Euler's totient function, φ(n) = (a-1) * (b-1).
' Choose an integer e such that 1 < e < φ(n) and the greatest common divisor of e and φ(n) is 1. Now e is released as the Public Key exponent.
' Determine d as d = e^-1 (mod φ(n)), i.e., d is the multiplicative inverse of e mod φ(n).
' d is kept as the Private Key component, so that d * e = 1 (mod φ(n)).
' The Public Key consists of the modulus n and the public exponent e, i.e., (e, n).
' The Private Key consists of the modulus n and the private exponent d, which must be kept secret, i.e., (d, n).
Encryption is the process of converting the original plain text (data) into cipher text (data). Steps:
' The cloud service provider transmits the Public Key (n, e) to the user who wants to store data.
' The user's data is mapped to an integer m by using an agreed-upon reversible protocol, known as a padding scheme.
' The data is encrypted, and the resulting cipher text C is C = m^e (mod n).
' This cipher text, or encrypted data, is now stored with the cloud service provider.
Decryption is the process of converting the cipher text (data) back to the original plain text (data). Steps:
' The cloud user requests the data from the cloud service provider.
' The cloud service provider verifies the authenticity of the user and gives the encrypted data, i.e., C.
' The cloud user then decrypts the data by computing m = C^d (mod n).
' Once m is obtained, the user can get back the original data by reversing the padding scheme.
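The three steps, key generation, encryption and decryption, can be traced end-to-end with deliberately small numbers (the primes and message here are the usual textbook choices; real keys use primes of 1024 bits or more):

```python
# Toy walkthrough of the RSA steps with tiny primes a = 61, b = 53.
a, b = 61, 53
n = a * b                    # modulus: 3233
phi = (a - 1) * (b - 1)      # Euler's totient: 3120

# Key generation: e coprime to phi, d its multiplicative inverse mod phi.
e = 17
d = pow(e, -1, phi)          # 2753, since 17 * 2753 = 1 (mod 3120)

# Encryption and decryption of a message already mapped to an integer < n.
m = 65
c = pow(m, e, n)             # cipher text: 65^17 mod 3233 = 2790
assert pow(c, d, n) == m     # the private exponent recovers m
```

Python's three-argument `pow` performs fast modular exponentiation, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse directly.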
Secret Sharing Algorithm:
Data stored in the cloud can be compromised or lost, so we have to come up with a way to secure those files. We can encrypt them before storing them in the cloud, which sorts out the disclosure aspects. However, what if the data is lost due to some catastrophe befalling the cloud service provider? We could store it on more than one cloud service, encrypting it before we send it off, so that each service holds the same file. And what if we use an insecure, easily guessable password? I have often thought that secret sharing algorithms could be employed to good effect in these circumstances instead [9].
Simply storing the data on multiple clouds solves the problem of data availability, but what about security? If there are multiple copies of the data, it just opens more doors for an intruder to hack in. Thus there needs to be a way to make sure that the data spread over multiple clouds is safe, or at least safer than it was in a single cloud. This is where we can apply the secret sharing algorithm presented by Adi Shamir. Invented in 1979, the algorithm occupies an important place in the area of cryptography. A somewhat similar algorithm was discovered by George Blakley; its performance is more or less the same, but the mathematics involved are more complicated. That is where the beauty of Shamir's secret sharing algorithm lies: in its simplicity of implementation.

Basic Principle
The basic idea behind the secret sharing algorithm is that when we want to secure certain data D, we divide it into n parts, say D1, D2, ..., Dn, in such a way that:

' Knowledge of any k or more Di pieces makes D easily computable.
' Knowledge of any k-1 or fewer Di pieces leaves D completely undetermined (in the sense that all its possible values are equally likely).
This scheme is called a (k, n) threshold scheme [10]. The value of the factor k can be chosen depending on the level of security we desire. For example, if the data is of the highest priority, such as a bank account password or transaction IDs, we can set k = n; in that case all participants are required to reconstruct the original secret data.

Mathematical Implementation
The mathematical implementation of the secret sharing algorithm can be understood with the help of a simple example, as given by Md Kausar Alam et al. The generalized idea is as follows [10]:

' We choose (k-1) coefficients a1, ..., a(k-1) at random.
' We divide our secret data S by picking a random polynomial of degree (k-1):
q(x) = a0 + a1x + a2x^2 + ... + a(k-1)x^(k-1), where a0 = S (i.e. the data).

Now, if we wish to divide the data into n parts, we substitute n different values of x into the polynomial q(x) and obtain n pairs (x, y), where y is simply the value of our polynomial q(x) at x.
The essential idea of Adi Shamir's threshold scheme is that 2 points are sufficient to define a line, 3 points are sufficient to define a parabola, 4 points to define a cubic curve and so forth. That is, it takes 'k' points to define a polynomial of degree 'k-1'.
Select any k of the available n parts; any combination of k parts will generate the same result. The values in these pairs are meaningless alone; only when k pairs are brought together and further worked upon do we get our secret back. These k points of the original polynomial are processed using Lagrange polynomials.
The Lagrange basis (for k = 3) is:
l0(x) = ((x - x1)(x - x2)) / ((x0 - x1)(x0 - x2))
l1(x) = ((x - x0)(x - x2)) / ((x1 - x0)(x1 - x2))
l2(x) = ((x - x0)(x - x1)) / ((x2 - x0)(x2 - x1))
Substitute the values of x from the selected k pairs into the Lagrange basis to obtain the k basis polynomials. Finally, on taking the summation of each basis polynomial multiplied by the y value from the corresponding selected pair, we get back our original polynomial. The summation can be represented mathematically as:
q(x) = y0 * l0(x) + y1 * l1(x) + ... + y(k-1) * l(k-1)(x)
The above explanation helps in understanding the working of the secret sharing algorithm. When done manually, the entire calculation takes minutes; in an implementation, now that microprocessor technology has reached a new high, thousands of such calculations can be done in seconds.
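Such a calculation can be sketched directly from the description above: split picks a random polynomial with free term equal to the secret, and reconstruct evaluates the Lagrange summation at x = 0. The prime P and the secret are arbitrary choices for the example:

```python
import random

P = 2087  # a prime larger than the secret; all arithmetic is done mod P

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    q = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, q(x)) for x in range(1, n + 1)]  # n points on the polynomial

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the free term a0 = secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # numerator: product of (0 - xj)
                den = den * (xi - xj) % P      # denominator: product of (xi - xj)
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(1234, n=5, k=3)
assert reconstruct(shares[:3]) == 1234   # any 3 shares suffice
assert reconstruct(shares[2:5]) == 1234  # a different 3 give the same result
```

Working modulo a prime, rather than over the rationals as in the hand calculation, keeps every share the same size and makes the "k-1 shares reveal nothing" property exact.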

Properties of Secret Sharing Algorithm:

The secret sharing algorithm possesses some dynamic properties that make it even more powerful. These properties, as described by Adi Shamir, are as follows:
' The size of each piece does not exceed the size of the original data.
' When k is kept fixed, Di pieces can be dynamically added or deleted without affecting the other Di pieces.
' It is easy to change the Di pieces without changing the original data D - all we need is a new polynomial q(x) with the same free term. This can enhance security.
' By using tuples of polynomial values as Di pieces, we can get a hierarchical scheme in which the number of pieces needed to determine D depends on their importance.

The overall performance of the proposed algorithm is compared with existing encryption techniques. Several parameters are considered: 1) encryption time, 2) throughput, and 3) decryption time. The comparison of the proposed and existing algorithms is shown in figures 6, 7 and 8.
From the literature survey it is clear that the Blowfish algorithm is superior in terms of encryption time, decryption time and throughput. But from the experimental results it is clear that the proposed method takes slightly less time than Blowfish. We conducted this experiment and found that the proposed method is the better option due to its simplicity and security.

Fig. 6: Comparison of encryption time
In figure 6 the throughputs of all these algorithms have been plotted, and it is observed that the proposed algorithm has better throughput compared to the other algorithms.

Fig. 7: Comparison of throughput
The comparison of the decryption times of the algorithms is shown in the figure, which shows that the proposed algorithm is as good as the Blowfish algorithm.

Fig. 8: Comparison of decryption time
Figure 7 shows the visual representation of the different encryption algorithms' key sizes. It should be noted that our algorithm occupies the lowest position in key size. Though the key size is smaller, it provides equivalent security for information in a distributed environment. Our method provides a challenge equal to that of other encryption algorithms while using fewer bits for the secret key. The proposed encryption technique is less computationally demanding and provides higher speed in a secure manner.

On the basis of the above result analysis, we can say that the proposed algorithm is lightweight in terms of encryption time, decryption time and throughput. Due to these properties, our algorithm can be a good choice for uploading encrypted files to a remote server and downloading files from the servers.



Fig. 9: Workflow Diagram


Fig .10: Dataflow Diagram


Fig. 11: Usecase Diagram

Fig. 12: Sequence Diagram


Fig. 13: Class Diagram


Fig. 14: Activity Diagram

H/W System Configuration:-

' Processor - Pentium III
' Speed - 1.1 GHz
' RAM - 256 MB (min)
' Hard Disk - 20 GB
' Floppy Drive - 1.44 MB
' Keyboard - Standard Windows Keyboard
' Mouse - Two or Three Button Mouse
' Monitor - SVGA
S/W System Configuration:-

Operating System : Windows 95/98/2000/XP
Application Server : Tomcat 5.0/6.x or NetBeans 7.0.1
Front End : HTML, Java, JSP
Script : JavaScript
Server-side Script : Java Server Pages








Nowadays the use of cloud computing has increased rapidly, and cloud computing security is still considered the major issue in the cloud computing environment. Customers do not want to lose their private data as a consequence of malicious insiders in the cloud. The loss of service availability has caused many problems for a large number of customers in recent times. Additionally, data intrusion leads to many troubles for the users of cloud computing. The purpose of this work is to survey the recent research on single clouds and multi-clouds and to address security risks and solutions. Research has been done to ensure the security of the single cloud and cloud storage, whereas multi-clouds have received less attention in the area of security. We support the migration to multi-clouds due to its ability to decrease the security risks that affect the cloud computing user. For future work, we intend to offer a framework to supply a secure cloud database that will guarantee to prevent security risks facing the cloud computing community. This framework will apply multi-clouds and the secret sharing algorithm to reduce the risk of data intrusion and the loss of service availability in the cloud and to ensure data integrity.

[1] Sapthami, P. Srinivasulu, B. Murali Krishna, 'A Novel Approach to Cloud Computing Security over Single to Multi Clouds', Int. Journal of Engineering Research and Application, Vol. 3, Issue 5, Sep-Oct 2013, pp. 636-640.
[2] Cong Wang, Qian Wang, Kui Ren and Wenjing Lou, 'Towards Secure and Dependable Storage Services in Cloud Computing', IEEE Transactions on Services Computing, Volume 5, April-June 2012.
[3] Sajjad Hashemi, International Journal of Security, Privacy and Trust Management (IJSPTM), Vol 2, No 4, August 2013.
[4] Johannes Braun, Alexander Wiesmaier, Johannes Buchmann, 'On the Security of Encrypted Secret Sharing', 2013 46th Hawaii International Conference on System Sciences.
[5] K. Chandra Moulnali and U. Sesadri, 'Single to Multi Clouds for Security in Cloud Computing by using Secret Key Sharing', International Journal of Computers and Technology, Aug 15, 2013.
[6] Kapila Sharma, Kavita Kanwar, Chanderjeet Yadav, 'Data Storage Security in Cloud Computing', International Journal of Computer Science and Management Research, Vol 2, Issue 1, January 2013.
[7] Maulik Dave, 'Data Storage Security in Cloud Computing', International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 10, October 2013.
[8] Hyun-Suk Yu, Yvette E. Gelogo, Kyung Jung Kim, 'Securing Data Storage in Cloud Computing', Journal of Security Engineering, 9 March 2012.
[9] B. Arun, S. K. Prashanth, 'Cloud Computing Security Using Secret Sharing Algorithm', VCE, Hyderabad, India, Volume 2, Issue 3, March 2013.
[10] Priyanka Pareek, 'Cloud Computing Security from Single to Multiclouds using Secret Sharing Algorithm', International Journal of Advanced Research in Computer Engineering & Technology, Volume 2, Issue 12, December 2013.
[11] K. Valli Madhavi, R. Tamilkodi, R. BalaDinakar, 'Data Storage Security in Cloud Computing for Ensuring Effective and Flexible Distributed', International Journal of Electronics Communication and Computer Engineering, Volume 3.
[12] Mohammed A. AlZain, Eric Pardede, Ben Soh, James A. Thom, 'Cloud Computing Security: From Single to Multi-clouds', 2012 45th Hawaii International Conference on System Sciences.
[13] Deepanchakaravarthi Purushothaman and Dr. Sunitha Abburu, 'An Approach to Data Storage Security in Cloud Computing', IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 2, No 1, March 2012.
[14] Venkatarao Matte, L. Ravi Kumar, 'A New Framework for Cloud Computing Security using Secret Sharing Algorithm over Single to Multiclouds', International Journal of Computer Trends and Technology, Volume 4, Issue 8, August 2013.
[15] Md Kausar Alam, Sharmila Banu K, 'An Approach Secret Sharing Algorithm in Cloud Computing Security over Single to Multiclouds', International Journal of Scientific and Research Publications, Volume 3, Issue 4, April 2013.
[16] Vijeyta Devi & Vadlamani Nagalakshmi, 'A Prospective Approach on Security with RSA Algorithm and Cloud SQL in Cloud Computing', International Journal of Computer Science and Engineering, Vol. 2, Issue 2, May 2013.
[17] Hyun-Suk Yu, Yvette E. Gelogo, Kyung Jung Kim, 'Securing Data Storage in Cloud Computing', Journal of Security Engineering, 9 May 2012.

Source: Essay UK
