That's a great feature, but not everyone needs that level of redundancy. If you already have copies of your data locally and are just using S3 as a convenient place to store data that is actively being accessed by services within the AWS infrastructure, RRS may be for you. It provides a much lower level of durability (99.99%) at a significantly lower cost. If that fits the bill for you, the next three code snippets will give you the basics you need to start using RRS in boto.
Create a New S3 Key Using the RRS Storage Class
# Create a new key in an S3 bucket and specify you want to use
# the Reduced Redundancy Storage (RRS) option of S3
import boto
# create connection to S3 service
s3 = boto.connect_s3()
# lookup my existing bucket
bucket = s3.lookup('mybucket')
# create a new, empty Key object in that bucket
key = bucket.new_key('rrskey')
# store the content of the key and specify you want to use RRS
key.set_contents_from_filename('/home/mitch/mylocalfile.txt', reduced_redundancy=True)
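If you want to confirm which storage class a key ended up with, one option (a minimal sketch of my own, assuming boto 2.x; the prefix simply matches the key created above) is to list the bucket, since boto fills in each Key's storage_class attribute from the bucket listing:

# Sketch: verify the storage class of the key we just uploaded.
# bucket.list() returns Key objects whose storage_class comes from the listing XML.
for k in bucket.list(prefix='rrskey'):
    print k.name, k.storage_class  # should show REDUCED_REDUNDANCY for the new key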
Convert An Existing S3 Key from Standard Storage Class to RRS
# Convert an existing key in an S3 bucket that uses the STANDARD storage class
# to one using the REDUCED_REDUNDANCY storage class. This uses the S3 COPY
# command to copy the key back to the same bucket. The ACL is preserved.
import boto
# create connection to S3 service
s3 = boto.connect_s3()
# lookup my existing bucket
bucket = s3.lookup('mybucket')
# lookup the Key you want to copy in the bucket
key = bucket.lookup('myoldkey')
# change the storage_class of the key from STANDARD to REDUCED_REDUNDANCY
key.change_storage_class('REDUCED_REDUNDANCY')
Create a Copy of an Existing S3 Key Using RRS
# Copy an existing key in an S3 bucket and specify you want to use
# the Reduced Redundancy Storage (RRS) option of S3 for the new copy
import boto
# create connection to S3 service
s3 = boto.connect_s3()
# lookup my existing bucket
bucket = s3.lookup('mybucket')
# lookup the Key you want to copy in the bucket
key = bucket.lookup('nonrrskey')
# create a copy of the key in the same bucket and specify you want to use RRS
key.copy(key.bucket.name, 'rrskey', reduced_redundancy=True)
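The same call can also copy into a different bucket: Key.copy() takes the destination bucket name and key name along with the reduced_redundancy flag (and, in boto 2.x, a preserve_acl flag). A quick sketch, where 'my-other-bucket' and 'rrskey-copy' are hypothetical names:

# Sketch: copy the key to another bucket, keep the source ACL,
# and request Reduced Redundancy Storage for the new copy
key.copy('my-other-bucket', 'rrskey-copy', reduced_redundancy=True, preserve_acl=True)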
Comments
S3's standard storage class is actually "just" 11 9's of durability.
Thanks for posting this and implementing the storage class changing feature. I wrote a very simple extension of your script which will convert all the objects in a bucket to the reduced redundancy model.
ReplyDeletehttp://www.bryceboe.com/2010/07/02/amazon-s3-convert-objects-to-reduced-redundancy-storage/
I get this error
AttributeError: 'Provider' object has no attribute 'storage_class'
Does this mean my bucket does not have this RRS feature enabled?
I too get this "AttributeError: 'Provider' object has no attribute 'storage_class'"
Anyone found any solutions?
Hi -
Could you create an issue on the Google Code project site (http://boto.googlecode.com/)? Provide any details you can and then I can track this to resolution. Thanks!
Mitch
Hi,
I am trying to store new files to S3 using boto v1.9x and this is what I am doing:
headers['x-amz-acl'] = acl
headers['x-amz-storage-class'] = 'REDUCED_REDUNDANCY'
key.set_contents_from_filename(filename, headers,)
However, when I list the metadata of the key after I uploaded my file to S3, the storage class is 'STANDARD' and is not 'REDUCED_REDUNDANCY' as expected.
Has anyone tried uploading to S3 using boto v1.9x and seen anything like this?
Thanks,
KT