You probably noticed, in the blitz of announcements from the recent I/O conference, that Google now has a storage service very similar to Amazon's S3. The Google Storage (GS) service provides a REST API that is compatible with many existing tools and libraries.
In addition to the API, Google also announced some tools to make it easier for people to get started using the Google Storage service. The main tool is called gsutil and it provides a command line interface to both Google Storage and S3. It allows you to reference files in GS or S3 or even on your file system using URL-style identifiers. You can then use these identifiers to copy content to/from the storage services and your local file system, between locations within a storage service or even between the services. Cool!
What was even cooler to me personally was that gsutil leverages boto for API-level communication with S3 and GS. In addition, Google engineers have extended boto with a higher-level abstraction of storage services that implements the URL-style identifiers. The command line tools are then built on top of this layer.
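To give a feel for what that URL-style abstraction buys you, here is a minimal sketch of parsing such identifiers. The function name and return shape are illustrative, not boto's or gsutil's actual API:

```python
from urllib.parse import urlparse

def parse_storage_uri(uri):
    """Split a URL-style storage identifier into (provider, bucket, key).

    Handles gs://, s3://, and file:// schemes, mirroring the kind of
    abstraction the gsutil/boto layer provides (illustrative only).
    """
    parsed = urlparse(uri)
    if parsed.scheme in ("file", ""):
        # local file system path; no bucket concept applies
        return ("file", None, parsed.path)
    return (parsed.scheme, parsed.netloc, parsed.path.lstrip("/"))
```

With identifiers normalized this way, a copy command can dispatch to the right backend (GS, S3, or local disk) based solely on the first element of the tuple.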
As an open source developer, it is very satisfying when other developers use your code to do something interesting and this is certainly no exception. In addition, I want to thank Mike Schwartz from Google for reaching out to me prior to the Google Storage session and giving me a heads up on what they were going to announce. Since that time Mike and I have been collaborating to try to figure out the best way to support the use of boto in the Google Storage utilities. For example, the storage abstraction layer developed by Google to extend boto is generally useful and could be extended to other storage services.
In summary, I view this as a very positive step in the boto project. I look forward to working with Google to make boto more useful for them and for the community of boto users. And as always, feedback from the boto community is not only welcome but essential.
Sunday, May 23, 2010
Tuesday, April 20, 2010
Failure as a Feature
One need only peruse the EC2 forums a bit to realize that EC2 instances fail. Shock. Horror. Servers failing? What kind of crappy service is this, anyway? The truth, of course, is that all servers can and eventually will fail. EC2 instances, Rackspace CloudServers, GoGrid servers, Terremark virtual machines, even that trusty Sun box sitting in your colo. They all can fail and therefore they all will fail eventually.
What's wonderful and transformative about running your applications in public clouds like EC2 and CloudServers is not that the servers never fail but that when they do fail you can actually do something about it. Quickly. And programmatically. From an operations point of view, the killer feature of the cloud is the API. Using the APIs, I can not only detect that there is a problem with a server but I can actually correct it. As easily as I can start a server, I can stop one and replace it with a new one.
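The detect-and-replace pattern can be sketched independently of any particular cloud API. Here `launch_server` and `terminate_server` are stand-ins for whatever your provider's client library offers; the names and the `(server_id, state)` shape are my own invention for illustration:

```python
def replace_failed_servers(servers, launch_server, terminate_server):
    """Return a new server list with any failed instance replaced.

    `servers` is a list of (server_id, state) tuples; anything not
    'running' is treated as failed and swapped for a fresh instance.
    """
    healthy = []
    for server_id, state in servers:
        if state == "running":
            healthy.append((server_id, state))
        else:
            terminate_server(server_id)                   # clean up the dead instance
            healthy.append((launch_server(), "running"))  # and launch a replacement
    return healthy
```

Run that on a schedule (or from a monitoring hook) and a failed instance becomes a routine event rather than a pager-melting emergency.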
Now, to do this effectively I really need to think about my application and my deployment differently. When you have physical servers in a colo, failure of a server is, well, failure. It's something to be dreaded. Something that you worry about. Something that usually requires money and trips to the data center to fix.
But for apps deployed on the cloud, failure is a feature. Seriously. Knowing that any server can fail at any time and knowing that I can detect that and correct that programmatically actually allows me to design better apps. More reliable apps. More resilient and robust apps. Apps that are designed to keep running with nary a blip when an individual server goes belly up.
Trust me. Failure is a feature. Embrace it. If you don't understand that, you don't understand the cloud.
Monday, April 19, 2010
Subscribing an SQS queue to an SNS topic
The new Simple Notification Service from AWS offers a very simple and scalable publish/subscribe service for notifications. The basic idea behind SNS is simple. You can create a topic. Then, you can subscribe any number of subscribers to this topic. Finally, you can publish data to the topic and each subscriber will be notified about the new data that has been published.
Currently, the notification mechanism supports email, http(s) and SQS. The SQS support is attractive because it means you can subscribe an existing SQS queue to a topic in SNS and every time information is published to that topic, a new message will be posted to SQS. That allows you to easily persist the notifications so that they could be logged or further processed at a later time.
Subscribing via the email protocol is very straightforward. You just provide an email address and SNS will send an email message to the address each time information is published to the topic (actually there is a confirmation step that happens first, also via email). Subscribing via HTTP(s) is also easy, you just provide the URL you want SNS to use and then each time information is published to the topic, SNS will POST a JSON payload containing the new information to your URL.
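On the receiving end of an HTTP(S) subscription, your handler just parses the JSON that SNS POSTs to you. This is a sketch; field names like `Type`, `TopicArn`, and `Message` reflect the SNS payload format as I understand it, but check the SNS documentation for the authoritative schema:

```python
import json

def handle_sns_post(body):
    """Extract the interesting bits from an SNS notification POST."""
    payload = json.loads(body)
    if payload.get("Type") != "Notification":
        # e.g. a SubscriptionConfirmation request; handle that separately
        return None
    return {
        "topic": payload.get("TopicArn"),
        "subject": payload.get("Subject"),
        "message": payload.get("Message"),
    }
```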
Subscribing an SQS queue, however, is a bit trickier. First, you have to be able to construct the ARN (Amazon Resource Name) of the SQS queue. Secondly, after subscribing the queue you have to set the ACL policy of the queue to allow SNS to send messages to the queue.
To make it easier, I added a new convenience method in the boto SNS module called subscribe_sqs_queue. You pass it the ARN of the SNS topic and the boto Queue object representing the queue and it does all of the hard work for you. You would call the method like this:
>>> import boto
>>> sns = boto.connect_sns()
>>> sqs = boto.connect_sqs()
>>> queue = sqs.lookup('TestSNSNotification')
>>> resp = sns.create_topic('TestSQSTopic')
>>> print resp
{u'CreateTopicResponse': {u'CreateTopicResult': {u'TopicArn': u'arn:aws:sns:us-east-1:963068290131:TestSQSTopic'},
u'ResponseMetadata': {u'RequestId': u'1b0462af-4c24-11df-85e6-1f98aa81cd11'}}}
>>> sns.subscribe_sqs_queue('arn:aws:sns:us-east-1:963068290131:TestSQSTopic', queue)
That should be all you have to do to subscribe your SQS queue to an SNS topic. The basic operations performed are:
- Construct the ARN for the SQS queue. In our example the URL for the queue is https://queue.amazonaws.com/963068290131/TestSNSNotification but the ARN would be "arn:aws:sqs:us-east-1:963068290131:TestSNSNotification"
- Subscribe the SQS queue to the SNS topic
- Construct a JSON policy that grants permission to SNS to perform a SendMessage operation on the queue. See below for an example of the JSON policy.
- Associate the new policy with the SQS queue by calling the set_attribute method of the Queue object with an attribute name of "Policy" and the attribute value being the JSON policy.
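The first and third steps above can be sketched in plain Python. Note that the legacy queue.amazonaws.com endpoint doesn't carry the region in the URL, so it has to be supplied (us-east-1 in our example); these helper names are mine, not boto's:

```python
import json
from urllib.parse import urlparse

def queue_arn_from_url(queue_url, region="us-east-1"):
    """Build an SQS ARN from a queue URL of the form
    https://queue.amazonaws.com/<account-id>/<queue-name>."""
    account, name = urlparse(queue_url).path.lstrip("/").split("/")
    return "arn:aws:sqs:%s:%s:%s" % (region, account, name)

def send_message_policy(queue_arn, topic_arn, sid):
    """Construct the kind of policy document subscribe_sqs_queue attaches,
    granting SNS permission to SendMessage to the queue."""
    return json.dumps({
        "Version": "2008-10-17",
        "Statement": [{
            "Resource": queue_arn,
            "Effect": "Allow",
            "Sid": sid,
            "Action": "SQS:SendMessage",
            "Condition": {"StringLike": {"aws:SourceArn": topic_arn}},
            "Principal": {"AWS": "*"},
        }],
    })
```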
The actual policy looks like this:
{"Version": "2008-10-17", "Statement": [{"Resource": "arn:aws:sqs:us-east-1:963068290131:TestSNSNotification", "Effect": "Allow", "Sid": "ad279892-1597-46f8-922c-eb2b545a14a8", "Action": "SQS:SendMessage", "Condition": {"StringLike": {"aws:SourceArn": "arn:aws:sns:us-east-1:963068290131:TestSQSTopic"}}, "Principal": {"AWS": "*"}}]}
The new subscribe_sqs_queue method is available in the current SVN trunk. Check it out and let me know if you run into any problems or have any questions.
Thursday, February 25, 2010
Stupid Boto Tricks #2 - Reliable Counters in SimpleDB
As a follow-up to yesterday's article about the new consistency features in SimpleDB, I came up with a handy little class in Python to implement a reliable integer counter in SimpleDB. The Counter class makes use of the consistent reads and conditional puts now available in SimpleDB to create a very Pythonic object that acts like an integer object in many ways but also manages the synchronization with the "true" counter object stored in SimpleDB.
The source code can be found in my bitbucket.org repo. I have copied the doc string from the class below to give an example of how the class can be used. Comments, questions and criticisms welcome. As with all Stupid Boto Tricks, remember the code is hot off the presses. Use with appropriate skepticism.
A consistent integer counter implemented in SimpleDB using new
consistent read and conditional put features.
Usage
-----
To create the counter initially, you need to instantiate a Counter
object, passing in the name of the SimpleDB domain in which you wish
to store the counter, the name of the counter within the
domain, and the initial value of the counter.
>>> import counter
>>> c = counter.Counter('mydomain', 'counter1', 0)
>>> print c
0
>>>
You can now increment and decrement the counter object using
the standard Python operators:
>>> c += 1
>>> print c
1
>>> c -= 1
>>> print c
0
These operations automatically update the value in SimpleDB
and also check for consistency. You can also use the Counter
object as an int in normal Python comparisons:
>>> c == 0
True
>>> c < 1
True
>>> c != 0
False
If you have multiple processes accessing the same counter
object, it will be possible for your view of the counter to become
out of sync with the value in SimpleDB. If this happens, it will
be automatically detected by the Counter object. A ValueError
exception will be raised and the current state of your Counter
object will be updated to reflect the most recent value stored
in SimpleDB.
>>> c += 1
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
...
ValueError: Counter was out of sync
>>> print c
2
>>>
In addition to storing the value of the counter in SimpleDB, the
Counter also stores a timestamp of the last update in the form of
an ISO8601 string. You can access the timestamp using the
timestamp attribute of the Counter object:
>>> c.timestamp
'2010-02-25T13:49:15.561674'
>>>
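The heart of the Counter can be sketched with an in-memory stand-in for SimpleDB. The real class issues a consistent read and a conditional put against the service; here a plain dict plays the domain and the expected-value check models the conditional put (names are mine, not the actual class internals):

```python
class OutOfSyncError(ValueError):
    """Raised when another process has changed the counter under us."""


def increment(store, name, expected, delta=1):
    """Add delta to the counter, failing if our view is stale.

    `store` is a dict standing in for the SimpleDB domain; the
    expected-value check mirrors SimpleDB's conditional put.
    """
    current = store.get(name, 0)       # consistent read
    if current != expected:
        # conditional put would be rejected; caller must resync
        raise OutOfSyncError("Counter was out of sync")
    store[name] = current + delta      # conditional put succeeds
    return store[name]
```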
Wednesday, February 24, 2010
Pick Your SimpleDB Flavor: AP or CP?
Back around 2000, a fellow named Eric Brewer posited something called the CAP theorem. The basic tenets of this theorem are that in the world of shared-data, distributed computing there are three basic properties: data consistency, system availability and tolerance to network partitioning, and only 2 of the 3 properties can be achieved at any given time (see Werner Vogels' article or this paper for more details on CAP).
SimpleDB is a great service from AWS that provides a fast, scalable metadata store that I find useful in many different systems and applications. When viewed through the prism of the CAP theorem, SimpleDB provides system availability (A) and tolerance to network partitioning (P) at the expense of consistency (C). So, as an AP system, it means users have to understand and deal with the lack of consistency or "eventual consistency". For many types of systems, this lack of consistency is not a problem and given that the vast majority of writes to SimpleDB are consistent in a short period of time (most in less than a second) it's not a big deal.
But what happens if you really do need consistency? For example, let's say you want to store a user's session state in SimpleDB. Each time the user makes another request on your web site you will want to pull their saved session data from the database. But if that state is not guaranteed to be the most current data written it will cause problems for your user. Or you may have a requirement to implement an incrementing counter. Without consistency, such a requirement would be impossible. Which would mean that using SimpleDB for those types of applications would be out of the question. Until now...
Pick Your Flavor
SimpleDB now provides a new set of API requests that let you perform reads and writes in a consistent manner (see this for details). For example, I can now look up an item in SimpleDB or perform a search and specify that I want the results to be consistent. If you specify the consistent flag in these requests, SimpleDB guarantees that the results returned will be consistent with all write operations received by SimpleDB prior to the read or query request.
Similarly, you can create or update a value of an object in SimpleDB and provide, along with the request, the value you expect that object to currently have. If your expected values differ from the actual values currently stored in SimpleDB, an exception will be raised and the value will not be updated.
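That conditional-put behavior is classic compare-and-set. A minimal model of the semantics, with a plain dict standing in for a SimpleDB item (the function name is illustrative, not the SimpleDB or boto API):

```python
def conditional_put(item, name, new_value, expected_value):
    """Write new_value only if the attribute currently holds expected_value,
    mirroring SimpleDB's conditional put semantics."""
    if item.get(name) != expected_value:
        raise ValueError(
            "Conditional check failed: expected %r, found %r"
            % (expected_value, item.get(name)))
    item[name] = new_value
```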
Of course, nothing is free. By insisting on Consistency, the CAP theorem says that we must be giving up on one of the other properties. In this case, what we are giving up is Availability. Basically, if we want the system to give us consistent data then it simply won't be able to respond as quickly as before. It will have to wait until it knows the state is consistent and while it is waiting, the system is unavailable to your application. Of course, that's exactly how every relational database you have ever used works so that should be no surprise. But if performance and availability are your main goals, you should use these Consistency features sparingly.
Give It A Try
The boto subversion repository has already been updated with code that supports these new consistency features. The API changes are actually quite small; a new, optional consistent_read parameter to methods like get_attributes and select and a new, optional expected_values parameter to methods like put_attributes and delete_attributes. I'll be posting some example code here soon.
Monday, February 15, 2010
The Softer Side of Scale
In Lori MacVittie's latest blog, "The Devil Is In The Details," she not only bestows upon me the honor of my own Theorem (yeah, in your face Pythagoras) she also gives a number of great examples of some of the necessary dimensions of scale beyond just the number of servers.
But besides things like networking and bandwidth, there is a softer side of scale that is equally important: people. You need a certain critical mass of support, billing, operations, development teams, security, sales, developer support, evangelists, etc. to create a viable service offering and economies of scale apply to these dimensions just as in hardware.
There may be niche markets where small providers can provide some unique value-add (specialized security procedures, vertical focus, non-standard technology stacks, etc.) but in general I think the dominance of scale is inevitable. As a developer I love the flexibility and programmability of cloud computing services but ultimately the trump card for businesses is cost and the best way to drive cost down is via scale.
Over the next five years, I think the majority of cloud computing will happen on public clouds and that the public cloud landscape will consist mainly of a relatively small number of big players who will be able to scale their services, both the hard side and the soft side, to achieve the economies of scale required in the marketplace.
Tuesday, February 9, 2010
Using S3 Versioning and MFA to CMA*
* - CMA = Cover My Ass
Amazon's Simple Storage Service (S3) is a great way to safely store loads of data in the cloud. It's highly available, simple to use and provides good data durability by automatically copying your data across multiple regions and/or zones. With over 80 billion objects stored (at last published count) I'm clearly not alone in thinking it's a good thing.
The only problem I've had with S3 over the years is the queasy feeling I get when I think about some nefarious individual getting hold of my AWS AccessKey/SecretKey. Since all S3 capabilities are accessed via a REST API and since that credential pair is used to authenticate all requests with S3, a bad guy/girl with my credentials (or a temporarily stupid version of me) could potentially delete all of the content I have stored in S3. That represents the "Worst Case Scenario" of S3 usage and I've spent a considerable amount of time and effort trying to find ways to mitigate this risk.
Using multiple AWS accounts can help. The Import/Export feature is another way to mitigate your exposure. But what I've always wanted was a WORM (Write Once Read Many) bucket. Well, not always, but at least since May 6, 2007. That would give me confidence that the data I store in S3 could not be accidentally or maliciously deleted. This kind of feature would also provide some interesting functionality for certain types of compliance and regulatory solutions.
Starting today, AWS has released a couple of really useful new features in S3: Versioning and MFADelete. Together, these features provide just about everything I wanted when I asked for a WORM bucket. So, how do they work?
Versioning
Versioning allows you to have multiple copies of the same object. Each version has a unique version ID and the versions are kept in ascending order by the date the version was created. Each bucket can be configured to either enable or disable versioning (only by the bucket owner) and the basic behavior is shown below in the table. The behavior of a Versioned bucket differs based on whether it is being accessed by a Version-Aware (VA) client or NonVersion-Aware (NVA) client.
| Operation | Unversioned Bucket | Versioned Bucket - NVA Client | Versioned Bucket - VA Client |
|---|---|---|---|
| GET | Retrieves the object or a 404 if the object is not found | Retrieves the latest version or a 404 if a Delete Marker is found | Retrieves the version specified by provided version ID |
| PUT | Stores the content in the bucket, overwriting any existing content | Stores content as new version | Stores content as new version |
| DELETE | Irrevocably deletes the content | Stores a DeleteMarker as latest version of object. | Permanently deletes version specified by provided version ID |
The above table is just a summary. You should see the S3 documentation for full details but even this summary clearly shows the benefits of versioning. If I enable versioning on a bucket, the chance of accidentally deleting content is greatly reduced. I would have to be using a version-aware delete tool and explicitly referencing individual version IDs to permanently delete them.
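The table's behavior can be captured in a toy model: a versioned bucket keeps every PUT as a new version, and a non-version-aware DELETE just stacks a delete marker on top, destroying nothing. This is a sketch of the semantics, not S3's implementation:

```python
class VersionedBucket:
    def __init__(self):
        self.versions = {}   # key -> list of (version_id, value); None value = delete marker
        self.next_id = 0

    def put(self, key, value):
        """Every PUT becomes a new version; nothing is overwritten."""
        self.next_id += 1
        self.versions.setdefault(key, []).append((self.next_id, value))

    def get(self, key):
        """Non-version-aware GET: latest version, or None at a delete marker."""
        history = self.versions.get(key)
        if not history or history[-1][1] is None:
            return None
        return history[-1][1]

    def delete(self, key):
        """Non-version-aware DELETE: adds a delete marker, destroys nothing."""
        self.next_id += 1
        self.versions.setdefault(key, []).append((self.next_id, None))
```

The key point: after a plain DELETE, a GET returns nothing, yet every prior version is still sitting in the bucket, recoverable by a version-aware client.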
So, accidental deletion of content is less of a risk with versioning but how about the other risk? If a bad guy/girl gets my AccessKey/SecretKey, they can still delete all of my content as long as they know how to use the versioning feature of S3. To address this threat, S3 has implemented a new feature called MFADelete.
MFADelete
MFADelete uses the Multi-Factor Authentication device you are already using to protect AWS Portal and Console access. What? You aren't using the MFA device? Well, you should go sign up for one right now. It's well worth the money, especially if you are storing important content in S3.
Like Versioning, MFADelete can be enabled on a bucket-by-bucket basis and only by the owner of the bucket. But, rather than just trusting that the person with the AccessKey/SecretKey is the owner, MFADelete uses the MFA device to provide an additional factor of authentication. To enable MFADelete, you send a special PUT request to S3 with an XML body that looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Status>Enabled</Status>
<MfaDelete>Enabled</MfaDelete>
</VersioningConfiguration>
In addition to this XML body, you also need to send a special HTTP header in the request, like this:
x-amz-mfa: <serial number of MFA device> <token from MFA device>
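Assembling that request body and header is straightforward. Here is a sketch using the element names and namespace from the request shown above (the helper function names are mine):

```python
import xml.etree.ElementTree as ET

NS = "http://s3.amazonaws.com/doc/2006-03-01/"

def versioning_config(status="Enabled", mfa_delete="Enabled"):
    """Build the XML body for the versioning/MFADelete PUT request."""
    root = ET.Element("VersioningConfiguration", xmlns=NS)
    ET.SubElement(root, "Status").text = status
    ET.SubElement(root, "MfaDelete").text = mfa_delete
    return ET.tostring(root, encoding="unicode")

def mfa_header(serial, token):
    """The x-amz-mfa header is just the device serial and current token."""
    return "x-amz-mfa: %s %s" % (serial, token)
```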
Once this request has been sent, all delete operations on the bucket and all requests to change the MFADelete status for the bucket will also require the special HTTP header with the MFA information. So, that means that even if the bad guy/girl gets your AccessKey/SecretKey combo they still won't be able to delete anything from your MFADelete-enabled bucket without the MFA device, as well.
It's not exactly the WORM bucket I was originally hoping for but it's a huge improvement and greatly reduces the risk of accidental or malicious deletion of data from S3. I got my pony!
The code in the boto subversion repo has already been updated to work with the new Versioning and MFADelete features. A new release will be out in the near future. I have included a link below to a unit test script that shows most of the basic operations and should give you a good start on incorporating these great new features into your application. The script prompts you for the serial number of your MFA device once and then prompts for a new MFA code each time one is required. You can only perform one operation with each code so you will have to wait for the device to cycle to the next code between each operation.
Example Code