ABSTRACT

Data growth rates will continue to accelerate in the coming years.
Cloud computing provides a new way of service provision by arranging various resources over the Internet. One of the important cloud services among the existing services is data storage. Stored data may hold numerous copies of the same content. Data deduplication is one of the vital techniques, which compresses data by removing duplicate copies of the same content to reduce the storage space. In order to protect data that is to be stored on the cloud, the data needs to be stored in encrypted form.
The main purpose of the proposed scheme is to ensure that only one instance of data is stored, minimizing the amount of storage space and providing optimized storage capacity. Here we design an effective approach which reduces encryption overhead by combining compression with encryption.

INTRODUCTION

Cloud computing is an IT paradigm that enables access to shared pools of configurable framework assets and higher-level administrations that can be quickly provisioned with insignificant management exertion, often over the Internet. Cloud computing services all work a little differently. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility. Many providers offer a friendly, browser-based dashboard that makes it easier for IT professionals and developers to order resources and manage their accounts. Some cloud computing services are also designed to work with APIs and CLIs, giving developers more options.
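The compress-then-encrypt ordering described above can be sketched in Python. Since the paper does not name a specific cipher or compression algorithm, this sketch assumes zlib for compression and uses a simple SHA-256-based XOR keystream purely as a stand-in cipher (not secure, and not the paper's method); the point it illustrates is that compressing first means the cipher processes fewer bytes, which is the source of the reduced encryption overhead.

```python
import hashlib
import zlib


def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key||counter.

    Stand-in cipher for illustration only -- a real system would use a
    vetted cipher such as AES.
    """
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def compress_then_encrypt(data: bytes, key: bytes) -> bytes:
    """Compress first so the cipher only has to cover the smaller payload."""
    compressed = zlib.compress(data, level=6)
    ks = keystream(key, len(compressed))
    return bytes(c ^ k for c, k in zip(compressed, ks))


def decrypt_then_decompress(blob: bytes, key: bytes) -> bytes:
    """Reverse the pipeline: undo the keystream, then decompress."""
    ks = keystream(key, len(blob))
    return zlib.decompress(bytes(c ^ k for c, k in zip(blob, ks)))
```

For redundant data the ciphertext is smaller than the plaintext, so both storage and encryption work shrink together.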
Some of the services that can be performed with the cloud are creating new apps and services, storing data, backing up and recovering data, and streaming audio and video. The cloud provides three types of services, IaaS, PaaS, and SaaS, and three deployment models: public, private, and hybrid. The idea of data deduplication was proposed to minimize storage space. It is also called intelligent compression or single-instance storage. In this paper we design and develop a new approach that effectively deduplicates redundant data in documents by using the concept of object-level components, resulting in less data chunking, fewer indexes, and a reduced need for tape backup. This technique focuses on improving utilization and can also be applied to network data transfer to reduce the number of bytes that must be sent.
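The single-instance storage idea above can be sketched as a content-addressed store: each object is fingerprinted with a cryptographic hash, identical content is kept only once, and names become pointers to the shared copy. This is an illustrative sketch of the general technique, not the paper's specific object-level design; the class and method names are hypothetical.

```python
import hashlib


class DedupStore:
    """Single-instance store: identical content is kept once, shared via pointers."""

    def __init__(self) -> None:
        self.blobs = {}   # fingerprint -> content, stored exactly once
        self.index = {}   # object name -> fingerprint (the "pointer")

    def put(self, name: str, data: bytes) -> None:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blobs:        # only the first instance consumes space
            self.blobs[fp] = data
        self.index[name] = fp           # subsequent copies just add a pointer

    def get(self, name: str) -> bytes:
        return self.blobs[self.index[name]]
```

Storing the same document under ten names costs one blob plus ten small index entries, which is where the storage savings come from.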
Data deduplication can operate at the file level, block level, and even the bit level. In file-level data deduplication, if any two files are exactly alike, only one copy of the file needs to be stored; subsequent iterations will hold a pointer to that file. A change of even a single bit then requires storing an entire copy as a different file. Block-level and bit-level data deduplication look within a file: if a file is updated, only the blocks that changed between the two versions are saved. File-level deduplication may require less processing power, owing to a smaller index and fewer comparisons, whereas block-level deduplication may take more processing power and use a much larger index to track the individual blocks.
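The file-level versus block-level trade-off can be made concrete with a small sketch: a file-level fingerprint changes entirely when one byte changes, while fixed-size block fingerprints isolate the change to a single block. The tiny block size and helper names here are illustrative assumptions, not values from the paper (real systems typically use blocks of several kilobytes).

```python
import hashlib

BLOCK = 4  # illustrative block size; production systems use much larger blocks


def file_fingerprint(data: bytes) -> str:
    """File-level dedup: one hash per file."""
    return hashlib.sha256(data).hexdigest()


def block_fingerprints(data: bytes) -> list[str]:
    """Block-level dedup: one hash per fixed-size block (larger index)."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Indices of blocks that differ between two same-length versions."""
    a, b = block_fingerprints(old), block_fingerprints(new)
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
```

With a one-byte edit, the file-level scheme must store the whole new file, while the block-level scheme stores only the one changed block, at the cost of keeping a per-block index.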