tag:blogger.com,1999:blog-12314800446197218572024-03-13T12:13:05.380-04:00ElasticianThe springy new world of computing.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.comBlogger43125tag:blogger.com,1999:blog-1231480044619721857.post-65292656465914376432014-02-04T15:09:00.002-05:002014-02-04T15:09:32.489-05:00Back To The FutureToday is my last day at Amazon Web Services.<br />
<br />
I have accepted a position as Director of Cloud Operations at <a href="http://scopely.com/" target="_blank">Scopely</a>. Scopely is an L.A.-based startup focusing on multiplayer mobile games. They have had a number of very successful games and have a clear vision of where they want to go. And they are all-in on AWS. So, I will be using boto and the AWS CLI and a host of other tools to help them achieve their goals and use AWS efficiently and effectively. They are a great group of people and I'm really excited and grateful for the opportunity. I originally created boto so I could use it to build cool things on AWS and I'm looking forward to doing that again.<br />
<br />
Leaving AWS is difficult. I've been a customer since the beginning but being part of AWS as an employee has been a fantastic experience. The things that really stand out for me are:<br />
<br />
<ul>
<li><b>People</b> - Over the past two years I've been gobsmacked by the consistently high quality of the people I've worked with at AWS. It's remarkable.</li>
<li><b>Innovate and Iterate</b> - People talk about the pace of innovation at AWS and it is very impressive. But innovation is just the fun part, the sexy part. What ultimately leads to success is the patient, persistent, customer-focused iteration that occurs after the initial "A ha!" moment. I've never seen anyone do it better.</li>
<li><b>Support for Open Source</b> - When I joined AWS, I brought with me a mature and vibrant open source project. There were innumerable ways things could have gone pear-shaped. But they didn't. We worked together to build a partnership that allowed AWS to contribute while also allowing the boto community to contribute as they always have. In addition to boto, all of the other AWS SDKs are released as open source and welcome contributions. In my experience, AWS has shown a real respect and appreciation for open source software and the communities that emerge around it.</li>
</ul>
Boto has been immeasurably improved by AWS's participation and I am glad to know it will continue in the future.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com7tag:blogger.com,1999:blog-1231480044619721857.post-85784962443538205162011-12-16T15:31:00.000-05:002011-12-16T18:52:08.184-05:00Looking at Clouds from Both Sides NowI'll apologize up front for that horrible pun in the title. No excuse, really.<br />
<br />
After 18 months at Eucalyptus, the best private cloud vendor out there, I have decided to see what things are like on the public cloud side. As of Monday, December 19, I will be a senior engineer at Amazon Web Services.<br />
<br />
I was very reluctant to leave Eucalyptus. It is a great company, full of great people and with a corporate culture that absolutely cannot be beat. And, while a lot of people's attention has been focused on shiny new things over the past year, Eucalyptus has quietly and steadily built amazing sales, support, marketing and professional services teams to match their already awesome engineering team. 2012 is going to be another kick-ass year for Eucalyptus and I really hate to miss that.<br />
<br />
But the idea of seeing how the sausage is made at the biggest public cloud is an opportunity I couldn't pass up. In my new job, I will still be focusing on software tools and how to make it easier for developers to use cloud infrastructures, both public and private. I will still be doing a lot of Python stuff and definitely still making sure that boto stays a popular, useful and independent open source project just as it did while I was at Eucalyptus.<br />
<br />
It should be fun!<br />
<br />
<br />Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com8tag:blogger.com,1999:blog-1231480044619721857.post-70471874489711506372011-12-07T11:41:00.001-05:002011-12-07T11:55:18.324-05:00Don't reboot me, bro!If you are an AWS user with EC2 instances running, you may have already gotten an email from AWS informing you that your instance(s) will be rebooted in the near future. I'm not exactly sure what is prompting this massive rebooting binge but the good folks at AWS have actually provided a new EC2 API request just so you can find out about upcoming maintenance events planned for your instances.<br />
<br />
We just committed code to boto that adds support for the new DescribeInstanceStatus request. Using this, you can programmatically query the status of any or all of your EC2 instances and find out if there is a reboot in their future and, if so, when to expect it.<br />
<br />
Here's an example of using the new method and accessing the data returned by it.<br />
<br />
<br />
<script src="https://gist.github.com/1443559.js?file=gistfile1.py">
</script>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com7tag:blogger.com,1999:blog-1231480044619721857.post-43650399059888321522011-11-13T17:44:00.001-05:002011-11-13T19:02:16.578-05:00Mapping Requests to EC2 API Versions<div class="separator" style="clear: both; text-align: left;">
I recently did some analysis of the EC2 API. I wanted to look at the API over time so I could remember which API requests were added in each of the 23 separate versions of the API over the past 5 years. The results were kind of interesting and I thought it would be worthwhile to share them here.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The following image shows a graph of the number of requests over time. If you click on the image, you will see a high-res PNG version of the information that lets you zoom in to get much greater detail. The reddish-colored section of each bar actually contains the names of the individual requests added in that version, but those are really only readable in the high-res version of the graphic.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://garnaat_pub.s3.amazonaws.com/ec2_api_versions.png" target="_blank"><img border="0" height="296" src="http://1.bp.blogspot.com/-HhOYR7lfHAY/TsBV6uPC70I/AAAAAAAAAEw/u6EYXXcAdns/s400/EC2+API+Versions.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Note that this analysis is only looking at the request level. I'm not diving deeper to look at the individual parameters in each request which, in some cases, have also changed over time. I may do that analysis at some point but it's a huge amount of work and I doubt that I'll find the time.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The raw JSON data behind this can be found in the <a href="https://github.com/garnaat/missingcloud/" target="_blank">missingcloud github repo</a>.</div>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com1tag:blogger.com,1999:blog-1231480044619721857.post-7609566864846729622011-10-31T10:41:00.001-04:002011-10-31T10:41:41.095-04:00Python and AWS Cookbook Available<div class="separator" style="clear: both; text-align: center;">
<a href="http://akamaicovers.oreilly.com/images/0636920020202/cat.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://akamaicovers.oreilly.com/images/0636920020202/cat.gif" /></a></div>
<br />
I recently completed a short book for O'Reilly called "<a href="http://shop.oreilly.com/product/0636920020202.do">Python and AWS Cookbook</a>". It's a collection of recipes for solving common problems in Amazon Web Services. The solutions are all in Python and, of course, use <a href="https://github.com/boto/boto">boto</a> heavily. The focus of this book is EC2 and S3 although there are a couple of quick detours into IAM and SNS. Many of the examples also work with <a href="http://eucalyptus.com/">Eucalyptus</a> so I have included some information about using boto with Eucalyptus as well as with <a href="http://code.google.com/apis/storage/docs/getting-started.html">Google Cloud Storage</a> for some of the S3-related recipes.<br />
<br />
You can get a hardcopy of the book but if you buy the e-book, you get free updates, and I am expecting to do quite a few. Many of the recipes came from problems people have posted on the <a href="http://groups.google.com/group/boto-users">boto users group</a> or on the boto IRC channel but I'm sure there are lots of other areas where additional example code would be useful. If you have specific requests, let me know. Depending on the response, I might also do additional cookbooks that focus on other services.<br />
<br />
The bird on the cover is a Sand Grouse. I lobbied heavily for a Honey Badger but to no avail.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com10tag:blogger.com,1999:blog-1231480044619721857.post-16828943984090058442011-10-14T13:45:00.000-04:002011-10-14T13:45:49.427-04:00Does Python Scale?I wonder how many times I've been asked that question over the years. Often, it's not even in the form of a question (Sorry, Mr. Trebek) but rather stated emphatically; "Python doesn't scale". This can be the start of long, heated discussions involving Global Interpreter Locks, interpreters vs. compilers, dynamic vs. static typing, etc. These discussions rarely end satisfactorily for any of the parties involved. And rarely are any opinions changed as a result.<br />
<br />
So, does Python scale?<br />
<br />
Well, YouTube is written mostly in Python. DropBox is written almost entirely in Python. Reddit. Quora. Disqus. FriendFeed. These are huge sites, handling gazillions of hits a day. They are written in Python. Therefore, Python scales.<br />
<br />
Yeah, but what about that web app I wrote that one time. Hosted on a cheapo, oversubscribed VPS, running straight CGI talking to a remote MySQL database running in a virtual machine on my Macbook Air. That thing fell over like a drunken sailor when I invited a few of my friends to go check it out. So, yeah. Forget what I said before. Obviously Python doesn't scale.<br />
<br />
The truth is, it's the wrong question. The stuff that allows Dropbox to <a href="http://highscalability.com/blog/2011/3/14/6-lessons-from-dropbox-one-million-files-saved-every-15-minu.html">store a million files every 15 minutes</a> has little to do with Python just as the things that caused my feeble web app to fail had little to do with Python. It has to do with the overall architecture of the application. How databases are sharded, how loosely or tightly components have been coupled, how you monitor, and how you react to the data your monitoring is providing you. And lots of other stuff. But you have to deal with those issues no matter what language you write the system in.<br />
<br />
No reasonable choice of computer language is going to guarantee your success or your failure. So pick the one you are most productive in and focus on properly architecting your app. That scales.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com5tag:blogger.com,1999:blog-1231480044619721857.post-32726230842390604622011-10-13T09:10:00.001-04:002011-10-13T09:55:25.783-04:00Accessing the Eucalyptus Community Cloud with botoThe <a href="http://open.eucalyptus.com/try/community-cloud">Eucalyptus Community Cloud (ECC)</a> is a great resource that allows you to try out a real cloud computing system without installing any software or incurring any costs. It's a sandbox environment that is maintained by Eucalyptus Systems to allow people to test-drive Eucalyptus software and experiment with cloud computing.<br />
<br />
To access the ECC, you need to sign up following the instructions <a href="http://open.eucalyptus.com/try/community-cloud#Signup">here</a>. Once you are signed up, you will be able to download a zip file containing the necessary credentials for accessing the ECC. If you unzip that file somewhere on your local filesystem you will find, among other things, a file called <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">eucarc</span>. The contents of that file will look something like this:<br />
<br />
<script src="https://gist.github.com/1284103.js?file=eucarc">
</script><br />
<br />
To get things to work seamlessly in boto, you need to copy a few pieces of information from the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">eucarc</span> file to your boto config file, which is normally found in <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">~/.boto</span>. Here's the info you need to add. The actual values, of course, should be the ones from your own eucarc file.<br />
<br />
<script src="https://gist.github.com/1284158.js?file=boto.cfg">
</script><br />
<br />
Notice that the values needed for <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">eucalyptus_host</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">walrus_host</span> are just the hostname or ip address of the server as specified in the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">EC2_HOST</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">S3_HOST</span> variables. You don't have to include the port number or the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">http</span> prefix. Having edited your boto config file, you can now easily access the ECC services in boto.<br />
<br />
<script src="https://gist.github.com/1284171.js?file=gistfile1.txt">
</script><br />
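If you'd rather not copy the host values out of eucarc by hand, they are easy to derive from the URLs in that file. Here's a little stdlib sketch (not part of boto; the URL below is a made-up example in the shape of a typical <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">EC2_URL</span> value):<br />
<br />

```python
# Extract just the hostname from a eucarc-style service URL, dropping the
# scheme, port, and path -- which is exactly what the boto config file's
# eucalyptus_host and walrus_host settings want.
from urllib.parse import urlparse

def host_from_url(url):
    """Return only the hostname portion of a service URL."""
    return urlparse(url).hostname

# Hypothetical example value; use the EC2_URL/S3_URL from your own eucarc.
ec2_url = "http://ecc.eucalyptus.com:8773/services/Eucalyptus"
print(host_from_url(ec2_url))  # -> ecc.eucalyptus.com
```

The same function works for the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">S3_URL</span> value to get your <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">walrus_host</span>.<br />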
This example assumes you are using the latest version of boto from <a href="https://github.com/boto/boto">github</a> or the <a href="https://github.com/boto/boto/tags">release candidate</a> for version 2.1 of boto.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com6tag:blogger.com,1999:blog-1231480044619721857.post-87982444009155490712011-02-22T08:39:00.001-05:002011-02-22T08:43:13.069-05:00Accessing the Internet Archive with botoA recent <a href="https://twitter.com/peteskomoroch/status/39962478301552640">tweet</a> from Pete Skomoroch twigged me to the fact that the <a href="http://www.archive.org/">Internet Archive</a> provides an <a href="http://www.archive.org/help/abouts3.txt">S3-like API</a>. Cool! The Internet Archive is a great resource which provides, in their words:<br />
<blockquote><span class="Apple-style-span" style="border-collapse: collapse;"><span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;">...a digital library of Internet sites and other cultural artifacts in digital form. Like a paper library, we provide free access to researchers, historians, scholars, and the general public.</span></span></blockquote>Since boto supports S3 I wondered if it would be possible to access the Internet Archive's API with boto. Turns out, it's quite simple. To make it even simpler, I've added a new "connect_ia" method. Before you can use this, you need to get API credentials from the Internet Archive but fortunately that's really easy. Just sign up for an account (if you don't already have one) and then go to <a href="http://www.archive.org/account/s3.php">this</a> link to generate the API keys.<br />
<br />
Once you have your credentials, the easiest thing to do is to add the credentials to your boto config file. They need to go in the Credentials section like this:<br />
<br />
<script src="https://gist.github.com/838663.js?file=boto_config_ia.cfg"></script><br />
<br />
Then, you can create a connection to the Internet Archive like this:<br />
<br />
<script src="https://gist.github.com/838665.js?file=access_ia_with_boto.py"></script><br />
<br />
I've only tested this a bit so if you run into any problems with it, post a message to <a href="http://groups.google.com/group/boto-users">boto-users</a> or create an issue.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com2tag:blogger.com,1999:blog-1231480044619721857.post-81597583489117051502011-02-17T21:14:00.003-05:002011-02-17T21:24:51.262-05:00All Your Website Are Belong to S3One of the most commonly requested features for S3 has been the ability to have it act more like a web server. In other words, to be able to put an index.html file into a bucket and then point someone to that bucket and they see your website. I found requests for this on the S3 forum dating back to <a href="https://forums.aws.amazon.com/thread.jspa?messageID=47360&#47360">June 2006</a>. I'm pretty sure if you search around in the forums long enough you will see posts from me predicting S3 would never have this feature.<br />
<br />
Well, as is so often the case, I have been proven wrong. AWS has just announced a new feature of S3 that lets you easily host static websites entirely on S3. The features are pretty simple to use. The basic process is:<br />
<br />
<ul><li>Create a bucket to hold your website (or use an existing one)</li>
<li>Make sure the bucket is readable by the world</li>
<li>Upload your website content including the default page (usually index.html) and an optional page to display in case of errors</li>
<li>Configure your bucket for use as a website (using a new API call)</li>
<li>Access your website via the new hostname S3 provides for website viewing. You can also create CNAME aliases, etc. to map the bucket name to your own domain name</li>
</ul><div>The following Python code provides an example of all of the above steps.</div><div><br />
</div><div><script src="https://gist.github.com/833135.js?file=s3_website.py">
</script></div><div><br />
</div><div>I could now access my website using the following special link:<br />
<br />
<a href="http://garnaat-website-2.s3-website-us-west-1.amazonaws.com/">http://garnaat-website-2.s3-website-us-west-1.amazonaws.com/</a><br />
<br />
I could also use the CNAME aliasing features of S3 to map my S3 website to my own domain (which is probably what most people will want to do). It's a great new feature for S3 and something that should prove useful to a lot of people.</div>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com6tag:blogger.com,1999:blog-1231480044619721857.post-83591136943487469482011-01-27T10:51:00.002-05:002011-01-27T11:06:16.828-05:00It Takes a Village...There are a lot of reasons someone might want to start an open source software project. You may be motivated by idealistic notions like freedom (as in both "free speech" and "free beer"), contribution to a community, and the desire for higher quality software due to the many eyes that scrutinize the code. Or, you could be motivated by more base desires like reputation, influence and the potential for paid work. Whatever the motivation though, it's clear that to truly achieve any of these goals you need people to take notice. You need users. You need a community.<br />
<br />
Another thing that has become clear to me over the past six months or so is that one of the best ways to build a community for an open source project is to host it on <a href="http://github.com/">github.com</a>. I'm not really sure why this is true. The underlying git DVCS system is certainly very powerful and efficient but it also can be cryptic and unintuitive at times. It could be the very distributed nature of git and github but there are other DVCS out there with hosted, centralized master repos. They are all good but they don't seem to be as good at motivating people to participate as github. Maybe github has just hit on the right combination of power, flexibility and gee-whiz GUI. Whatever the reason, the results for the boto project have been pretty amazing so far.<br />
<br />
In the six months we have been on github.com (our first commit there was on July 12th, 2010), we have:<br />
<ul><li>290 people watching the boto repository.</li>
<li>66 people who have forked, or copied, the repository to allow them to experiment on their own.</li>
<li>61 pull requests, which are the culmination of those experiments in forked repositories. They are basically people asking to have their local modifications merged with the main boto repository. Thus far 50 have been closed, 11 are still open.</li>
<li>340 commits to the repository by 35 different contributors. That's about 1.7 commits per day. Commits have ranged from single line typo fixes to entire new boto modules.</li>
<li>Major contributions from Google in support of their Google Storage service.</li>
<li>11495 downloads of packaged boto releases from our Google code project page.</li>
<li>35978 downloads of just the 2.0b3 packaged release from pypi.python.org, the Python package index</li>
<li>42,420 visits (124,278 page views) by 10,734 unique visitors to our Google project page and 13,620 views of our github project page in the last 90 days.</li>
<li>I received three, count them three, unsolicited contributions for a boto module to support the new <a href="http://aws.amazon.com/ses/">Simple Email Service</a> from AWS within 24 hours of the service's announcement.</li>
</ul><div>I'm not suggesting that all of this is due to github. It certainly helps that boto provides an interface to a very popular set of cloud-based services in a very popular programming language. But github has clearly been a factor in building the community and increasing contributions.</div><div><br />
</div><div>So, thanks github and thanks to the boto community. I also want to take this opportunity to thank the my colleagues and the management team at <a href="http://eucalyptus.com/">Eucalyptus Systems</a> for supporting me in my efforts to support the boto community. It really underscores their commitment to open source software.</div>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com6tag:blogger.com,1999:blog-1231480044619721857.post-33644795143277372922010-12-05T14:22:00.001-05:002010-12-05T15:07:40.474-05:00S3 MultiPart Upload in botoAmazon recently introduced MultiPart Upload to S3. This new feature lets you upload large files in multiple parts rather than in one big chunk. This provides two main benefits:<br />
<br />
<ul><li>You can get resumable uploads and don't have to worry about high-stakes uploading of a 5GB file which might fail after 4.9GB. Instead, you can upload in parts and know that all of the parts that have successfully uploaded are there patiently waiting for the rest of the bytes to make it to S3.</li>
<li>You can parallelize your upload operation. So, not only can you break your 5GB file into 1000 5MB chunks, you can run 20 uploader processes and get much better overall throughput to S3.</li>
</ul>It took a few weeks but we have just added full support for MultiPart Upload to the boto library. This post gives a very quick intro to the new functionality to help get you started.<br />
<br />
Below is a transcript from an interactive IPython session that exercises the new features. Below that is a line by line commentary of what's going on.<br />
<br />
<script src="https://gist.github.com/729279.js?file=boto_mpupload_1"></script><br />
<br />
<ol><li>Self-explanatory, I hope 8^)</li>
<li>We create a connection to the S3 service and assign it to the variable <code>c</code>.</li>
<li>We lookup an existing bucket in S3 and assign that to the variable <code>b</code>.</li>
<li>We initiate a MultiPart Upload to bucket <code>b</code>. We pass in the <code>key_name</code>. This <code>key_name</code> will be the name of the object in S3 once all of the parts are uploaded. This creates a new instance of a MultiPartUpload object and assigns it to the variable <code>mp</code>.</li>
<li>You might want to do a bit of exploration of the new object. In particular, it has an attribute called <code>id</code> which is the upload transaction ID assigned by S3. This transaction ID must accompany all subsequent requests related to this MultiPart Upload.</li>
<li>I open a local file. In this case, I had a 17MB PDF file. I split that into 5MB chunks using the split command ("<code>split -b5m test.pdf</code>"). This creates three 5MB chunks and one smaller chunk with the leftovers. You can use larger chunk sizes if you want but 5MB is the minimum size (except for the last, of course).</li>
<li>I upload this chunk to S3 using the <code>upload_part_from_file</code> method of the MultiPartUpload object.</li>
<li>Close the filepointer</li>
<li>Open the file for the second chunk.</li>
<li>Upload it.</li>
<li>Close it.</li>
<li>Open the file for the third chunk.</li>
<li>Upload it.</li>
<li>Close it.</li>
<li>Open the file for the fourth and final chunk (the small one).</li>
<li>Upload it.</li>
<li>Close it.</li>
<li>I can now examine all of the parts that are currently uploaded to S3 related to this key_name. As you can see, I can use the MultiPartUpload object as an iterator and, when so doing, the generator object handles any pagination of results from S3 automatically. Each object in the list is an instance of the Part class and has attributes such as <code>part_number, size, etag</code>.</li>
<li>Now that the last part has been uploaded I can complete the MultiPart Upload transaction by calling the <code>complete_upload</code> method of the MultiPartUpload object. If, on the other hand, I wanted to cancel the operation I could call <code>cancel_upload</code> and all of the parts that had been uploaded would be deleted in S3.</li>
</ol>This provides a simple example. However, to really benefit fully from the MultiPart Upload functionality, you should consider trying to introduce some concurrency into the mix. Either fire off separate threads or subprocesses to upload different parts in parallel. The actual order the parts are uploaded doesn't matter as long as they are numbered sequentially.<br />
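If you do go the concurrent route, the bookkeeping is straightforward. Here's a rough sketch (plain Python, not part of boto) of how you might carve a file into numbered parts to hand off to worker threads or subprocesses; the actual upload call is shown only as a comment:<br />
<br />

```python
# Plan the (part_number, offset, size) tuples for a MultiPart Upload. Each
# tuple could be handed to a separate worker, which would seek to `offset`
# in the file and upload `size` bytes via mp.upload_part_from_file().
# Order of upload doesn't matter as long as the numbering is sequential.
def plan_parts(total_size, chunk_size=5 * 1024 * 1024):
    """Split total_size bytes into sequentially numbered chunks.

    5MB is the minimum part size S3 allows (except for the last part).
    """
    parts = []
    offset = 0
    part_number = 1  # S3 part numbers start at 1
    while offset < total_size:
        size = min(chunk_size, total_size - offset)
        parts.append((part_number, offset, size))
        offset += size
        part_number += 1
    return parts

# A 17MB file yields three full 5MB parts plus one smaller leftover part,
# matching the "split -b5m test.pdf" example above.
for part_number, offset, size in plan_parts(17 * 1024 * 1024):
    print(part_number, offset, size)
```
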
<br />
<h3>Update</h3>To find all of the current MultiPart Upload transactions for a given bucket, you can do this:<br />
<br />
<script src="https://gist.github.com/729419.js?file=boto_mpupload_2"></script>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com16tag:blogger.com,1999:blog-1231480044619721857.post-88142807130956675382010-09-15T19:21:00.002-04:002010-09-16T13:09:37.044-04:00Using Identity & Access Management (IAM) Service in botoThe recently announced <a href="http://aws.amazon.com/iam">Identity and Access Management service</a> from AWS provides a whole bunch of useful and long-requested functionality. The boto library provides full support for IAM and this article provides a quick intro to some basic IAM capabilities and how to access them via boto.<br />
<br />
IAM introduces a new concept to AWS: <b>users</b>. Prior to IAM, you created an account and that account had the necessary credentials for accessing various AWS services (via the access key and secret key or X.509 cert associated with your account) and also acted as a billing entity (you get a bill from AWS). Conflating these two concepts causes problems, especially if you want to use AWS within businesses and enterprises. In those environments, the people who use the services and the people who manage and pay for the services are very distinct. With IAM, AWS has introduced the notion of a <b>user</b> who has the necessary credentials to use AWS services but accounting and billing are handled by the controlling AWS account. This distinction between accounts and users is actually fundamental to IAM and important to understand.<br />
<br />
Based on that description, it's clear that IAM can be used as a user provisioning system for AWS. Using the API, you can provision new AWS users, create credentials for the user (both for the AWS Console web site as well as for the API), create X.509 certs for the user or associate existing certs, and even manage the Multi-Factor Authentication (MFA) devices associated with the user. In addition, you can create groups, add and remove users from those groups and associate policies with groups to control which services and resources members of a group have access to. And all of the users are created under the control of a single master account which ultimately owns all resources created by all users and gets the AWS bill for all users in one monthly statement.<br />
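To make the policy piece a bit more concrete: a policy is just a JSON document. Here's a small sketch of one (the grammar is defined by IAM; the specific statement shown is an illustrative example granting full EC2 and S3 access, and boto just passes the document through as a string when you attach it to a group):<br />
<br />

```python
import json

# Build an IAM policy document granting full access to EC2 and S3 and
# nothing else. The resulting JSON string is what gets attached to a group.
policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*"],
            "Resource": "*",
        }
    ]
}
policy_json = json.dumps(policy)
print(policy_json)
```
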
<br />
So, clearly if you are a business (large or small) and want to automate the process of user management and have visibility into the resources and costs across your entire organization, IAM is great. But, even if you are an individual developer, IAM provides some important features that have been conspicuously absent from AWS up till now.<br />
<br />
If you read my previous posts about managing your AWS credentials (<a href="http://www.elastician.com/2009/06/managing-your-aws-credentials-part-1.html">part1</a> and <a href="http://www.elastician.com/2009/06/managing-your-aws-credentials-part-2.html">part2</a>) you will probably remember some of the hoops we had to jump through to find a way to safely manage AWS credentials on EC2 instances. And even with all of that hoop-jumping, we couldn't really come up with a perfect solution. But with IAM's ability to create users with very limited capabilities, we finally have an elegant way to solve the problem.<br />
<br />
I'm going to show a few code examples that illustrate how to accomplish some simple but useful things in IAM using boto. Before we delve into those examples, though, I want to talk a little bit about the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">iam</span> module in boto because it uses a different approach than other boto modules. Depending on the reaction, this approach may be expanded to other modules in the future.<br />
<br />
Using boto, you make requests to services and they send responses back to you. For AWS, the responses are XML documents that contain the information you requested. The standard approach to handling these responses in boto has been to write a small Python class for each possible response type. The class is then responsible for parsing the XML and extracting the pertinent values and storing them as attributes on the Python object. Users then interact with the Python objects and never see the XML. This approach works well but the downside is that it requires a lot of small, hand-coded Python objects to be written which takes time.<br />
<br />
For the iam module, I wrote a generic response handler that parses the XML and turns it into a native Python data structure. So, if the following XML is returned from the service:<br />
<br />
<script src="http://gist.github.com/581410.js">
</script><br />
<br />
Then the generic response parser will return the following Python data structure:<br />
<br />
<script src="http://gist.github.com/581462.js">
</script><br />
<br />
As you can see, the Python data structure is deeply nested. To make it easier to get to the stuff you want, I've added a little magic to allow you to directly access any key, regardless of the depth, by simply accessing it as an attribute. So, if you did something like this:<br />
<br />
<script src="http://gist.github.com/581477.js">
</script><br />
<br />
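To make the idea concrete, here is a rough, self-contained sketch of the generic parser and the attribute-access magic described above. The class names and sample XML are illustrative only, not boto's actual implementation:

```python
import xml.etree.ElementTree as ET

_MISSING = object()

def _find_key(obj, name):
    """Depth-first search for `name` anywhere in a nested dict."""
    if isinstance(obj, dict):
        if name in obj:
            return obj[name]
        for value in obj.values():
            found = _find_key(value, name)
            if found is not _MISSING:
                return found
    return _MISSING

class MagicDict(dict):
    """A dict that lets you access any key, at any depth, as an attribute."""
    def __getattr__(self, name):
        found = _find_key(self, name)
        if found is _MISSING:
            raise AttributeError(name)
        return found

def xml_to_dict(element):
    """Generically convert an XML element tree into nested MagicDicts.

    Simplified for illustration: repeated sibling tags and XML
    attributes are ignored here.
    """
    children = list(element)
    if not children:
        return element.text
    return MagicDict((child.tag.split('}')[-1], xml_to_dict(child))
                     for child in children)

xml = """<CreateGroupResponse>
  <CreateGroupResult>
    <Group>
      <GroupName>admins</GroupName>
      <Arn>arn:aws:iam::123456789012:group/admins</Arn>
    </Group>
  </CreateGroupResult>
  <ResponseMetadata>
    <RequestId>abc-123</RequestId>
  </ResponseMetadata>
</CreateGroupResponse>"""

response = xml_to_dict(ET.fromstring(xml))
print(response['CreateGroupResult']['Group']['GroupName'])  # admins
print(response.GroupName)  # admins, via the attribute-access shortcut
```

The shortcut simply returns the first matching key found in a depth-first search, which is why it works regardless of how deeply the value is nested.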
I'd love feedback on this approach. Feel free to comment on this post or post to the boto users Google group. Now, on to the examples.<br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Create An Admin Group</span><br />
<br />
This example shows how to create a group that is authorized to access all actions supported by IAM. This would allow you to defer user/group management to another person or group of people.<br />
<br />
<script src="http://gist.github.com/580756.js">
</script><br />
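The policy document attached by an example like this is just JSON. A minimal sketch of a policy allowing every IAM action (illustrative, and not necessarily the exact contents of the gist above):

```python
import json

# A policy granting access to all IAM actions. The structure follows
# the standard AWS policy language; the statement below is a sketch,
# not necessarily the gist's exact contents.
admin_policy = json.dumps({
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:*",
        "Resource": "*",
    }]
})

# With boto of this era, attaching it would look roughly like the
# following (it requires real AWS credentials, so it's commented out):
#
#   import boto
#   iam = boto.connect_iam()
#   iam.create_group('admins')
#   iam.put_group_policy('admins', 'AdminPolicy', admin_policy)
print(admin_policy)
```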
<br />
<span class="Apple-style-span" style="font-size: x-large;">Create a Group for EC2 / S3 Users</span><br />
<br />
This example shows how to create a group and user with full access to all EC2 and S3 functionality but nothing else.<br />
<br />
<script src="http://gist.github.com/580250.js">
</script><br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Create a Group for Read Only Access to SimpleDB Domain</span><br />
<br />
This example illustrates how you can use IAM to solve some of those credential problems we discussed earlier. Assume that you have a SimpleDB domain that contains important information needed by an application running on EC2 instances. To query the domain, you need to have AWS credentials on the EC2 instances, but you really don't want to put your main AWS credentials on there because a bad guy could do all kinds of damage with those credentials. IAM, to the rescue! We can create a group that has read-only access to the specific domain it needs and is authorized to use only the GetAttributes and Select requests from SimpleDB. Even if a bad guy gets those credentials, they really can't do any damage. Here's how to set that up in IAM.<br />
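A sketch of what such a read-only policy might look like as JSON; the account id and domain name here are hypothetical placeholders:

```python
import json

# Read-only SimpleDB policy of the kind described above: only the
# GetAttributes and Select actions, restricted to a single domain.
# The account id (123456789012) and domain name (mydomain) are
# hypothetical placeholders.
readonly_policy = json.dumps({
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sdb:GetAttributes", "sdb:Select"],
        "Resource": "arn:aws:sdb:us-east-1:123456789012:domain/mydomain",
    }]
})
print(readonly_policy)
```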
<br />
<script src="http://gist.github.com/581210.js">
</script>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com13tag:blogger.com,1999:blog-1231480044619721857.post-80858948446333703342010-07-02T12:55:00.000-04:002010-07-02T12:55:13.318-04:00And Now For Something Completely Different...As some of you may know, I've spent the past three years or so focused on AWS-related consulting through my own little company, CloudRight. It's been fun and exciting and I feel that I've really had a front row seat for the amazing growth and excitement around cloud computing. But consulting has its downsides, too. After a while the pace of new projects started to lose its lustre and I found myself pining for the fjords, or at least for a bit more focus in my professional life.<br />
<br />
So, I'm excited to say that I have joined the development team at <a href="http://eucalyptus.com/">Eucalyptus</a>. I like their technology, I like their positioning in the marketplace, I like their commitment to open source but mainly I just really like the team. Everyone there is not only great at what they do, they are also great people and in my experience that's the recipe for a great company. I'm absolutely thrilled to be a part of it.<br />
<br />
My main focus at Eucalyptus will be in the area of tools. Basically trying to make sure that all of the capabilities of the core system are easily and consistently accessible to users and administrators. The current Euca2ools command line utilities are a great start but we all feel there is an opportunity to do a lot more.<br />
<br />
This is also great news for <a href="http://boto.googlecode.com/">boto</a>. Euca2ools are built on top of boto so, for the first time, boto will actually be a part of my day job rather than something I try to squeeze in between gigs and after hours. That should mean more frequent and consistent releases and better quality overall.<br />
<br />
And now, it's time for the traditional "new job" <a href="http://www.youtube.com/watch?v=IhJQp-q1Y1s">fish slapping dance</a>...Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com4tag:blogger.com,1999:blog-1231480044619721857.post-89505972610737469932010-06-13T10:18:00.003-04:002010-06-15T18:46:34.681-04:00Using Reduced Redundancy Storage (RRS) in S3This is just a quick blog post to provide a few examples of using the new <a href="http://aws.amazon.com/about-aws/whats-new/2010/05/19/announcing-amazon-s3-reduced-redundancy-storage/">Reduced Redundancy Storage</a> (RRS) feature of S3 in boto. This new storage class in S3 gives you the option to tradeoff redundancy for cost. The normal S3 service (and corresponding pricing) is based on a <s>12-nines</s> 11 nines (yes, that's 99.999999999% - <i>Thanks to Jeff Barr for correction in comments below</i>) level of durability. In order to achieve this extremely highly level of reliability, the S3 service must incorporate a high-level of redundancy. In other words, it keeps many copies of your data in many different locations so that even if multiple locations encounter failures, your data will still be safe. <br />
<br />
That's a great feature but not everyone needs that level of redundancy. If you already have copies of your data locally and are just using S3 as a convenient place to store data that is actively being accessed by services within the AWS infrastructure, RRS may be for you. It provides a much lower level of durability (99.99%) at a significantly lower cost. If that fits the bill for you, the next three code snippets will provide you with the basics you need to start using RRS in boto.<br />
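A quick back-of-the-envelope way to see what that durability difference means, assuming the quoted figures are annual per-object durability (a simplification, but good enough to show the scale):

```python
# Expected objects lost per year out of 1,000,000 stored, assuming the
# quoted figures are annual per-object durability. This is a rough
# illustration, not AWS's actual durability model.
standard_durability = 0.99999999999   # 11 nines
rrs_durability = 0.9999               # 4 nines

objects = 1_000_000
standard_losses = objects * (1 - standard_durability)
rrs_losses = objects * (1 - rrs_durability)

print(standard_losses)  # roughly 0.00001 objects per year
print(rrs_losses)       # roughly 100 objects per year
```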
<br />
<span class="Apple-style-span" style="font-size: x-large;">Create a New S3 Key Using the RRS Storage Class</span><br />
<script src="http://gist.github.com/411851.js?file=create_rrs_key.py">
</script><br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Convert An Existing S3 Key from Standard Storage Class to RRS</span><br />
<script src="http://gist.github.com/411994.js?file=convert_key_to_rrs.py">
</script><br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Create a Copy of an Existing S3 Key Using RRS</span><br />
<script src="http://gist.github.com/411893.js?file=copy_to_rrs_key.py">
</script>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com6tag:blogger.com,1999:blog-1231480044619721857.post-48022065784689970762010-06-04T11:14:00.003-04:002010-06-04T11:28:03.576-04:00AWS By The NumbersI recently gave a short talk about Amazon Web Services at <a href="http://www.gluecon.com/2010">GlueCon 2010</a>. It was part of a panel discussion called "Major Platform Providers" and included similar short talks from others about Azure, Force.com and vCloud. It's very hard (i.e. impossible) to give a meaningful technical overview of AWS in 10 minutes so I struggled a bit trying to decide what to talk about. In the end, I decided to try to come up with some quantitative data to describe Amazon Web Services. My goal was to try to show that AWS is:<br />
<br />
<ul><li>A first mover - AWS introduced their first web services in 2005</li>
<li>A broad offering - 13 services currently available</li>
<li>Popular - details of how I measure that described below</li>
<li>Prolific - the pace of innovation from AWS is impressive</li>
</ul><br />
After the conference, I was going to post my slides but I realized they didn't really work that well on their own so I decided instead to turn the slides into a blog post. That gives me the opportunity to explain the data and resulting graphs in more detail and also allows me to provide the graphs in a more interactive form.<br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Data? What data?</span><br />
<br />
The first challenge in trying to do a data-heavy talk about AWS is actually finding some data. Most of the data that I would really like to have (e.g. # users, # requests, etc.) is not available. So, I needed to find some publicly available data that could provide some useful insight. Here's what I came up with:<br />
<br />
<ul><li>Forum data - I scraped the AWS developer forums and grabbed lots of useful info. I use things like forum views, number of messages and threads, etc. to act as a proxy for service popularity. It's not perfect by any means, but it's the best I could come up with.</li>
<li>AWS press releases - I analyzed press releases from 2005 to the present day and use that to populate a spreadsheet of significant service and feature releases.</li>
<li>API WSDLs - I parsed the WSDL for each of the services to gather data about API complexity.</li>
</ul>With that background, let's get on to the data.<br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Service Introduction and Popularity</span><br />
<br />
This first graph uses data scraped from the forums. Each line in the graph represents one service and the Y axis is the total number of messages in that service's forum for the given month. The idea is that the volume of messages on a forum should have some relationship to the number of people using the service and, therefore, the popularity of the service. Following the timeline across also shows the date of introduction for each of the services.<br />
<br />
<i>Note: If you have trouble loading the following graph, try going directly to the Google Docs <a href="https://spreadsheets3.google.com/ccc?key=tFgHnLxlTcANXxcpX5bYPUw&hl=en">spreadsheet</a> which I have shared.</i><br />
<br />
<script src="https://spreadsheets.google.com/gpub?url=http%3A%2F%2Ftbaoebshgeq225lhq2bam0m0a5mf6u0b.spreadsheets.gmodules.com%2Fgadgets%2Fifr%3Fup__table_query_url%3Dhttps%253A%252F%252Fspreadsheets.google.com%252Ftq%253Frange%253DA1%25253AM49%2526headers%253D-1%2526gid%253D0%2526key%253D0AlAkiTyuDuDRdEZnSG5MeGxUY0FOWHhjcFg1YllQVXc%2526pub%253D1%26up_title%3DAWS%2520Forum%2520Messages%26up__table_query_refresh_interval%3D300%26up_scale%3Dfixed%26up_values_suffix%26up_annotations_width%3D25%26up_display_zoom_buttons%3D1%26up_display_exact_values%3D0%26up_display_annotations_filter%3D0%26up_display_legend_inNewline%3D1%26url%3Dhttp%253A%252F%252Fwww.google.com%252Fig%252Fmodules%252Ftime-series-line.xml&height=656&width=1274"></script><br />
<br />
<br />
The following graph shows another, simpler view of the forum data. This view plots the normalized average number of forum views for each service.<br />
<br />
<img src="https://spreadsheets2.google.com/oimg?key=0AlAkiTyuDuDRdDhsQVpKSjR2ckNXamd0R0YwaWhpVXc&oid=2&v=1275664268370" /><br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">API Complexity</span><br />
<br />
Another piece of publicly available data for AWS is the WSDL for each service. The WSDL is an XML document that describes the operations supported by the service and the data types used by the operations. The following graph shows the API Complexity (measured as the number of operations) for each of the services.<br />
<br />
<img src="https://spreadsheets1.google.com/oimg?key=0AlAkiTyuDuDRdGU1djI5ZlhOT2ZsMnNDdW5pWjk2Qmc&oid=2&v=1275664316848" /><br />
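The counting itself is simple; here's a toy sketch of the approach using a made-up, radically simplified WSDL fragment (real service WSDLs are much larger, but declare each API operation the same way):

```python
import xml.etree.ElementTree as ET

# Count the <wsdl:operation> elements in a WSDL document. The document
# below is a hypothetical fragment for illustration only.
WSDL_NS = 'http://schemas.xmlsoap.org/wsdl/'

wsdl = """<definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
  <wsdl:portType name="ToyPortType">
    <wsdl:operation name="RunInstances"/>
    <wsdl:operation name="TerminateInstances"/>
    <wsdl:operation name="DescribeInstances"/>
  </wsdl:portType>
</definitions>"""

root = ET.fromstring(wsdl)
operations = root.findall('.//{%s}operation' % WSDL_NS)
print(len(operations))  # 3
```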
<br />
<span class="Apple-style-span" style="font-size: x-large;">Velocity</span><br />
<br />
Finally, I wanted to try to measure the pace of innovation by AWS. To do this, I used the spreadsheet I created that tracked all significant service and feature announcements by AWS. I then counted the number of events per quarter for AWS and used that to compute an agile-style velocity. <br />
<br />
<img src="https://spreadsheets0.google.com/oimg?key=0AlAkiTyuDuDRdFVBV3NiM0Q4c1htUHRsMHltTUxPX2c&oid=2&v=1275664346806" /><br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Summary</span><br />
<br />
Hopefully these graphs are interesting and help to prove the points that I outlined at the beginning of the talk. I actually have a lot more data available from the forum scraping and may try to mine that in different ways later.<br />
<br />
While this data was all about AWS, I think the bigger point is that the level of interest and innovation in Amazon's services is really just an indicator of a trend across the cloud computing market.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com0tag:blogger.com,1999:blog-1231480044619721857.post-38744119734572269552010-05-23T19:29:00.000-04:002010-05-23T19:29:17.588-04:00Boto and Google StorageYou probably noticed, in the blitz of announcements from the recent <a href="http://code.google.com/events/io/2010/">I/O conference</a> that Google now has a <a href="http://code.google.com/apis/storage/">storage service</a> very similar to Amazon's S3 service. The Google Storage (GS) service provides a REST API that is compatible with many existing tools and libraries.<br />
<br />
In addition to the API, Google also announced some tools to make it easier for people to get started using the Google Storage service. The main tool is called <a href="http://code.google.com/apis/storage/docs/gsutil.html">gsutil</a> and it provides a command line interface to both Google Storage and S3. It allows you to reference files in GS or S3 or even on your file system using URL-style identifiers. You can then use these identifiers to copy content to/from the storage services and your local file system, between locations within a storage service or even between the services. Cool!<br />
<br />
What was even cooler to me personally was that gsutil leverages <a href="http://boto.googlecode.com/">boto</a> for API-level communication with S3 and GS. In addition, Google engineers have extended boto with a higher-level abstraction of storage services that implements the URL-style identifiers. The command line tools are then built on top of this layer.<br />
<br />
As an open source developer, it is very satisfying when other developers use your code to do something interesting and this is certainly no exception. In addition, I want to thank Mike Schwartz from Google for reaching out to me prior to the Google Storage session and giving me a heads up on what they were going to announce. Since that time Mike and I have been collaborating to try to figure out the best way to support the use of boto in the Google Storage utilities. For example, the storage abstraction layer developed by Google to extend boto is generally useful and could be extended to other storage services.<br />
<br />
In summary, I view this as a very positive step in the boto project. I look forward to working with Google to make boto more useful for them and for the community of boto users. And as always, feedback from the boto community is not only welcome but essential.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com3tag:blogger.com,1999:blog-1231480044619721857.post-65861951462706530952010-04-20T10:31:00.001-04:002010-04-20T10:34:54.332-04:00Failure as a FeatureOne need only peruse the EC2 forums a bit to realize that EC2 instances fail. Shock. Horror. Servers failing? What kind of crappy service is this, anyway. The truth, of course, is that all servers can and eventually will fail. EC2 instances, Rackspace CloudServers, GoGrid servers, Terremark virtual machines, even that trusty Sun box sitting in your colo. They all can fail and therefore they all will fail eventually.<br />
<br />
What's wonderful and transformative about running your applications in public clouds like EC2 and CloudServers, etc. is not that the servers never fail but that when they do fail you can actually do something about it. Quickly. And programmatically. From an operations point of view, the killer feature of the cloud is the API. Using the APIs, I can not only detect that there is a problem with a server but I can actually correct it. As easily as I can start a server, I can stop one and replace it with a new one.<br />
<br />
Now, to do this effectively I really need to think about my application and my deployment differently. When you have physical servers in a colo, failure of a server is, well, failure. It's something to be dreaded. Something that you worry about. Something that usually requires money and trips to the data center to fix.<br />
<br />
But for apps deployed on the cloud, failure is a feature. Seriously. Knowing that any server can fail at any time and knowing that I can detect that and correct that programmatically actually allows me to design better apps. More reliable apps. More resilient and robust apps. Apps that are designed to keep running with nary a blip when an individual server goes belly up.<br />
<br />
Trust me. Failure is a feature. Embrace it. If you don't understand that, you don't understand the cloud.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com5tag:blogger.com,1999:blog-1231480044619721857.post-24858897716484578962010-04-19T22:50:00.000-04:002010-04-19T22:50:24.676-04:00Subscribing an SQS queue to an SNS topicThe new <a href="http://aws.amazon.com/sns/">Simple Notification Service</a> from AWS offers a very simple and scalable publish/subscribe service for notifications. The basic idea behind SNS is simple. You can create a topic. Then, you can subscribe any number of subscribers to this topic. Finally, you can publish data to the topic and each subscriber will be notified about the new data that has been published.<br />
<br />
Currently, the notification mechanism supports email, http(s) and SQS. The SQS support is attractive because it means you can subscribe an existing SQS queue to a topic in SNS and every time information is published to that topic, a new message will be posted to SQS. That allows you to easily persist the notifications so that they could be logged or further processed at a later time.<br />
<br />
Subscribing via the email protocol is very straightforward. You just provide an email address and SNS will send an email message to the address each time information is published to the topic (actually there is a confirmation step that happens first, also via email). Subscribing via HTTP(s) is also easy: you just provide the URL you want SNS to use and then each time information is published to the topic, SNS will POST a JSON payload containing the new information to your URL.<br />
<br />
Subscribing an SQS queue, however, is a bit trickier. First, you have to be able to construct the ARN (Amazon Resource Name) of the SQS queue. Secondly, after subscribing the queue you have to set the ACL policy of the queue to allow SNS to send messages to the queue.<br />
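The first step can be sketched as a small helper. This is hypothetical code, not boto's actual implementation, and it assumes the legacy queue.amazonaws.com endpoint maps to us-east-1:

```python
from urllib.parse import urlparse

def queue_arn_from_url(url, region='us-east-1'):
    """Build an SQS queue ARN from its URL.

    Hypothetical helper for illustration; it assumes the URL has the
    form https://queue.amazonaws.com/<account-id>/<queue-name>.
    """
    account_id, queue_name = urlparse(url).path.strip('/').split('/')
    return 'arn:aws:sqs:%s:%s:%s' % (region, account_id, queue_name)

arn = queue_arn_from_url(
    'https://queue.amazonaws.com/963068290131/TestSNSNotification')
print(arn)  # arn:aws:sqs:us-east-1:963068290131:TestSNSNotification
```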
<br />
To make it easier, I added a new convenience method in the boto SNS module called <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">subscribe_sqs_queue</span>. You pass it the ARN of the SNS topic and the boto Queue object representing the queue and it does all of the hard work for you. You would call the method like this:<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">>>> import boto</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">>>> sns = boto.connect_sns()</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">>>> sqs = boto.connect_sqs()</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">>>> queue = sqs.lookup('TestSNSNotification')</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">>>> resp = sns.create_topic('TestSQSTopic')</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">>>> print resp</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"></span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">{u'CreateTopicResponse': {u'CreateTopicResult': {u'TopicArn': u'arn:aws:sns:us-east-1:963068290131:TestSQSTopic'},<br />
u'ResponseMetadata': {u'RequestId': u'1b0462af-4c24-11df-85e6-1f98aa81cd11'}}}<br />
>>> sns.subscribe_sqs_queue('arn:aws:sns:us-east-1:963068290131:TestSQSTopic', queue)<br />
<br />
</span><br />
That should be all you have to do to subscribe your SQS queue to an SNS topic. The basic operations performed are:<br />
<br />
<ol><li>Construct the ARN for the SQS queue. In our example the URL for the queue is <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">https://queue.amazonaws.com/963068290131/</span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">TestSNSNotification</span> but the ARN would be "<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">arn:aws:sqs:us-east-1:963068290131:</span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">TestSNSNotification</span>"</li>
<li>Subscribe the SQS queue to the SNS topic</li>
<li>Construct a JSON policy that grants permission to SNS to perform a SendMessage operation on the queue. See below for an example of the JSON policy.</li>
<li>Associate the new policy with the SQS queue by calling the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">set_attribute</span> method of the Queue object with an attribute name of "Policy" and the attribute value being the JSON policy.</li>
</ol><br />
The actual policy looks like this:<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">{"Version": "2008-10-17", "Statement": [{"Resource": "arn:aws:sqs:us-east-1:963068290131:TestSNSNotification", "Effect": "Allow", "Sid": "ad279892-1597-46f8-922c-eb2b545a14a8", "Action": "SQS:SendMessage", "Condition": {"StringLike": {"aws:SourceArn": "arn:aws:sns:us-east-1:963068290131:TestSQSTopic"}}, "Principal": {"AWS": "*"}}]}</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><br />
</span><br />
<span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;">The new </span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">subscribe_sqs_queue</span><span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;"> method is available in the current SVN trunk. Check it out and let me know if you run into any problems or have any questions.</span>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com7tag:blogger.com,1999:blog-1231480044619721857.post-30944849694849704022010-02-25T09:05:00.002-05:002010-02-25T09:09:02.191-05:00Stupid Boto Tricks #2 - Reliable Counters in SimpleDBAs a follow-up to <a href="http://bit.ly/dtFh3a">yesterday's article</a> about the new consistency features in SimpleDB, I came up with a handy little class in Python to implement a reliable integer counter in SimpleDB. The Counter class makes use of the consistent reads and conditional puts now available in SimpleDB to create a very Pythonic object that acts like an integer object in many ways but also manages the synchronization with the "true" counter object stored in SimpleDB.<br />
<br />
The source code can be found in <a href="http://bitbucket.org/mitch/stupidbototricks/src/tip/counter.py#">my bitbucket.org repo</a>. I have copied the doc string from the class below to give an example of how the class can be used. Comments, questions and criticisms welcome. As with all Stupid Boto Tricks, remember the code is hot off the presses. Use with appropriate skepticism.<br />
<br />
<div><pre><code>
A consistent integer counter implemented in SimpleDB using new
consistent read and conditional put features.
Usage
-----
To create the counter initially, you need to instantiate a Counter
object, passing in the name of the SimpleDB domain in which you wish
to store the counter, the name of the counter within the
domain and the initial value of the counter.
>>> import counter
>>> c = counter.Counter('mydomain', 'counter1', 0)
>>> print c
0
>>>
You can now increment and decrement the counter object using
the standard Python operators:
>>> c += 1
>>> print c
1
>>> c -= 1
>>> print c
0
These operations are automatically updating the value in SimpleDB
and also checking for consistency. You can also use the Counter
object as an int in normal Python comparisons:
>>> c == 0
True
>>> c < 1
True
>>> c != 0
False
If you have multiple processes accessing the same counter
object, it will be possible for your view of the Python object to become
out of sync with the value in SimpleDB. If this happens, it will
be automatically detected by the Counter object. A ValueError
exception will be raised and the current state of your Counter
object will be updated to reflect the most recent value stored
in SimpleDB.
>>> c += 1
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
...
ValueError: Counter was out of sync
>>> print c
2
>>>
In addition to storing the value of the counter in SimpleDB, the
Counter also stores a timestamp of the last update in the form of
an ISO8601 string. You can access the timestamp using the
timestamp attribute of the Counter object:
>>> c.timestamp
'2010-02-25T13:49:15.561674'
>>>
</code></pre></div>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com1tag:blogger.com,1999:blog-1231480044619721857.post-72224227679310222472010-02-24T18:08:00.002-05:002010-02-24T18:11:27.719-05:00Pick Your SimpleDB Flavor: AP or CP?Back around 2000, a fellow named Eric Brewer posited something called the CAP theorem. The basic tenets of this theorem are that in the world of shared data, distributed computing there are three basic properties: data consistency, system availability and tolerance to network partitioning, and only 2 of the 3 properties can be achieved at any given time (see <a href="http://www.allthingsdistributed.com/2008/12/eventually_consistent.html">Werner Vogels' article</a> or <a href="http://portal.acm.org/citation.cfm?doid=564585.564601">this paper</a> for more details on CAP).<br />
<br />
SimpleDB is a great service from AWS that provides a fast, scalable metadata store that I find useful in many different systems and applications. When viewed through the prism of the CAP theorem, SimpleDB provides system availability (A) and tolerance to network partitioning (P) at the expense of consistency (C). So, as an AP system, users have to understand and deal with the lack of consistency or "eventual consistency". For many types of systems, this lack of consistency is not a problem and given that the vast majority of writes to SimpleDB are consistent in a short period of time (most in less than a second) it's not a big deal.<br />
<br />
But what happens if you really do need consistency? For example, let's say you want to store a user's session state in SimpleDB. Each time the user makes another request on your web site you will want to pull their saved session data from the database. But if that state is not guaranteed to be the most current data written it will cause problems for your user. Or you may have a requirement to implement an incrementing counter. Without consistency, such a requirement would be impossible. Which would mean that using SimpleDB for those types of applications would be out of the question. Until now...<br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Pick Your Flavor</span><br />
<span class="Apple-style-span" style="font-size: x-large;"><br />
</span><br />
SimpleDB now provides a new set of API requests that let you perform reads and writes in a consistent manner (see <a href="http://developer.amazonwebservices.com/connect/ann.jspa?annID=611">this</a> for details). For example, I can now look up an item in SimpleDB or perform a search and specify that I want the results to be consistent. By specifying a consistent flag in these requests, SimpleDB will guarantee that the results returned will be consistent with all write operations received by SimpleDB prior to the read or query request.<br />
<br />
Similarly, you can create or update a value of an object in SimpleDB and provide, with the request, information about what you expect the current value of that object to be. If your expected values differ from the actual values currently stored in SimpleDB, an exception will be raised and the value will not be updated.<br />
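To see the mechanics, here's a toy in-memory model of that conditional-put behavior. The names are illustrative; this is not the actual SimpleDB API:

```python
class ConditionalCheckFailed(Exception):
    """Raised when an expected value doesn't match the stored value."""

class ToyDomain:
    """In-memory model of conditional puts; illustrative only."""
    def __init__(self):
        self.items = {}

    def put_attributes(self, item_name, attrs, expected_value=None):
        if expected_value is not None:
            name, value = expected_value
            current = self.items.get(item_name, {}).get(name)
            if current != value:
                raise ConditionalCheckFailed(
                    'expected %s=%r, found %r' % (name, value, current))
        self.items.setdefault(item_name, {}).update(attrs)

d = ToyDomain()
d.put_attributes('item1', {'counter': '0'})
# Succeeds: the stored value matches the expectation.
d.put_attributes('item1', {'counter': '1'}, expected_value=('counter', '0'))
# Fails: the stored value is now '1', not '0', so the write is rejected.
try:
    d.put_attributes('item1', {'counter': '2'}, expected_value=('counter', '0'))
except ConditionalCheckFailed as e:
    print(e)  # expected counter='0', found '1'
```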
<br />
Of course, nothing is free. By insisting on Consistency, the CAP theorem says that we must be giving up one of the other properties. In this case, what we are giving up is Availability. Basically, if we want the system to give us consistent data then it simply won't be able to respond as quickly as before. It will have to wait until it knows the state is consistent, and while it is waiting, the system is unavailable to your application. Of course, that's exactly how every relational database you have ever used works, so that should be no surprise. But if performance and availability are your main goals, you should use these Consistency features sparingly.<br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Give It A Try</span><br />
<br />
The boto subversion repository has already been updated with code that supports these new consistency features. The API changes are actually quite small; a new, optional consistent_read parameter to methods like <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">get_attributes</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">select</span> and a new, optional expected_values parameter to methods like <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">put_attributes</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">delete_attributes</span>. I'll be posting some example code here soon.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com7tag:blogger.com,1999:blog-1231480044619721857.post-59616063394351171112010-02-15T09:48:00.001-05:002010-02-15T09:48:55.125-05:00The Softer Side of ScaleIn Lori MacVittie's latest blog, "<a href="http://devcentral.f5.com/weblogs/macvittie/archive/2010/02/15/the-devil-is-in-the-details.aspx">The Devil Is In The Details</a>" she not only bestows upon me the honor of my own Theorem (yeah, in your face Pythagoras) she also gives a number of great examples of some of the necessary dimensions of scale beyond just the number of servers.<br />
<br />
But besides things like networking and bandwidth, there is a softer side of scale that is equally important: people. You need a certain critical mass of support, billing, operations, development teams, security, sales, developer support, evangelists, etc. to create a viable service offering and economies of scale apply to these dimensions just as in hardware.<br />
<br />
There may be niche markets where small providers can provide some unique value-add (specialized security procedures, vertical focus, non-standard technology stacks, etc.) but in general I think the dominance of scale is inevitable. As a developer I love the flexibility and programmability of cloud computing services but ultimately the trump card for businesses is cost and the best way to drive cost down is via scale.<br />
<br />
Over the next five years, I think the majority of cloud computing will happen on public clouds and that the public cloud landscape will consist mainly of a relatively small number of big players who will be able to scale their services, both the hard side and the soft side, to achieve the economies of scale required in the marketplace.Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com0tag:blogger.com,1999:blog-1231480044619721857.post-58768246690647448262010-02-09T11:18:00.002-05:002010-02-09T11:40:40.594-05:00Using S3 Versioning and MFA to CMA*<i>* - CMA = Cover My Ass</i><br />
<i><br />
</i><br />
Amazon's Simple Storage Service (S3) is a great way to safely store loads of data in the cloud. It's highly available, simple to use and provides good data durability by automatically copying your data across multiple facilities. With over 80 billion objects stored (at last published count) I'm clearly not alone in thinking it's a good thing.<br />
<br />
The only problem I've had with S3 over the years is the queasy feeling I get when I think about some nefarious individual getting hold of my AWS <a href="http://www.elastician.com/2009/06/managing-your-aws-credentials-part-1.html">AccessKey/SecretKey</a>. Since all S3 capabilities are accessed via a REST API and since that credential pair is used to authenticate all requests with S3, a bad guy/girl with my credentials (or a temporarily stupid version of me) could potentially delete all of the content I have stored in S3. That represents the "Worst Case Scenario" of S3 usage and I've spent a considerable amount of time and effort trying to find ways to mitigate this risk.<br />
<br />
Using multiple AWS accounts can help. The <a href="http://aws.amazon.com/importexport/">Import/Export</a> feature is another way to mitigate your exposure. But what I've always wanted was a WORM (Write Once Read Many) bucket. Well, not always, but at least since <a href="http://developer.amazonwebservices.com/connect/thread.jspa?messageID=58563&#58563">May 6, 2007</a>. That would give me confidence that the data I store in S3 could not be accidentally or maliciously deleted. This kind of feature would also provide some interesting functionality for certain types of compliance and regulatory solutions.<br />
<br />
Starting today, AWS has released a couple of really useful new features in S3: Versioning and MFADelete. Together, these features provide just about everything I wanted when I asked for a WORM bucket. So, how do they work?<br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">Versioning</span><br />
<br />
Versioning allows you to keep multiple versions of the same object. Each version has a unique version ID, and the versions are ordered by the date they were created. Versioning can be enabled or disabled on each bucket (only by the bucket owner), and the basic behavior is summarized in the table below. The behavior of a Versioned bucket differs based on whether it is being accessed by a Version-Aware (VA) client or a NonVersion-Aware (NVA) client.<br />
<br />
<table cellpadding="4"><tbody>
<tr align="center"> <th>Operation</th> <th>Unversioned Bucket</th> <th>Versioned Bucket - NVA Client</th> <th>Versioned Bucket - VA Client</th> </tr>
<tr> <td>GET</td> <td>Retrieves the object or a 404 if the object is not found</td> <td>Retrieves the latest version, or a 404 if a Delete Marker is found</td> <td>Retrieves the version specified by the provided version ID</td> </tr>
<tr> <td>PUT</td> <td>Stores the content in the bucket, overwriting any existing content</td> <td>Stores content as new version</td> <td>Stores content as new version</td> </tr>
<tr> <td>DELETE</td> <td>Irrevocably deletes the content</td> <td>Stores a DeleteMarker as the latest version of the object</td> <td>Permanently deletes the version specified by the provided version ID</td> </tr>
</tbody></table><br />
The above table is just a summary. You should see the <a href="http://developer.amazonwebservices.com/connect/ann.jspa?annID=599">S3 documentation</a> for full details but even this summary clearly shows the benefits of versioning. If I enable versioning on a bucket, the chance of accidentally deleting content is greatly reduced. I would have to be using a version-aware delete tool and explicitly referencing individual version ID's to permanently delete them.<br />
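To make the version-aware workflow concrete, here is a rough sketch in Python. The method names follow the boto development branch mentioned above (<code>configure_versioning</code>, etc.), but treat them as illustrative rather than final; the <code>latest_version_ids</code> helper is purely hypothetical, showing which version a version-aware GET would fetch by default.

```python
def enable_versioning(bucket_name):
    """Turn on versioning for an existing bucket. This hits the S3 API,
    so it needs AWS credentials configured for boto."""
    import boto  # deferred so the pure helper below needs only stdlib
    bucket = boto.connect_s3().get_bucket(bucket_name)
    bucket.configure_versioning(True)
    return bucket


def latest_version_ids(versions):
    """Given (key_name, version_id, is_latest) tuples -- the shape of
    the information a version listing returns -- map each key name to
    the version ID a GET without an explicit version ID would retrieve."""
    return dict((name, vid) for name, vid, is_latest in versions if is_latest)
```

Note that even after several PUTs of the same key, only the version flagged as latest is what an NVA client sees; every older version remains retrievable by its version ID.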
<br />
So, accidental deletion of content is less of a risk with versioning but how about the other risk? If a bad guy/girl gets my AccessKey/SecretKey, they can still delete all of my content as long as they know how to use the versioning feature of S3. To address this threat, S3 has implemented a new feature called MFADelete.<br />
<br />
<span class="Apple-style-span" style="font-size: x-large;">MFADelete</span><br />
<br />
MFADelete uses the <a href="http://www.elastician.com/2009/10/managing-your-aws-credentials-part-3.html">Multi-Factor Authentication device</a> you are already using to protect AWS Portal and Console access. What? You aren't using the MFA device? Well, you should go sign up for one right now. It's well worth the money, especially if you are storing important content in S3.<br />
<br />
Like Versioning, MFADelete can be enabled on a bucket-by-bucket basis and only by the owner of the bucket. But, rather than just trusting that the person with the AccessKey/SecretKey is the owner, MFADelete uses the MFA device to provide an additional factor of authentication. To enable MFADelete, you send a special PUT request to S3 with an XML body that looks like this:<br />
<span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;"><br />
</span><br />
<span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;"><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><?xml version="1.0" encoding="UTF-8"?><br />
<br />
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><br />
<br />
<Status>Enabled</Status><br />
<br />
<MfaDelete>Enabled</MfaDelete><br />
<br />
</VersioningConfiguration></span><br />
<br />
In addition to this XML body, you also need to send a special HTTP header in the request, like this:</span><br />
<span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;"><br />
</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">x-amz-mfa: <serial number of MFA device> <token from MFA device></span><br />
<span class="Apple-style-span" style="font-family: Times, 'Times New Roman', serif;"><br />
</span><br />
Once this request has been sent, all delete operations on the bucket and all requests to change the MFADelete status for the bucket will also require the special HTTP header with the MFA information. That means that even if the bad guy/girl gets your AccessKey/SecretKey combo, they still won't be able to delete anything from your MFADelete-enabled bucket without the MFA device as well.<br />
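A sketch of how this might look from Python (hedged: the boto release supporting this was still pending at the time, so the <code>mfa_token</code> argument shown here follows the development branch and should be treated as illustrative). boto takes the MFA serial number and token as a tuple and sends them as the <code>x-amz-mfa</code> header described above; the <code>mfa_header_value</code> helper just illustrates the header format.

```python
def mfa_header_value(serial, token):
    """The value sent for the x-amz-mfa header: the device serial number
    and the current six-digit token, separated by a single space."""
    return '%s %s' % (serial, token)


def delete_version_with_mfa(bucket_name, key_name, version_id, serial, token):
    """Permanently delete one version from an MFADelete-enabled bucket.
    Requires AWS credentials and a fresh token from the MFA device."""
    import boto  # deferred so mfa_header_value stays stdlib-only
    bucket = boto.connect_s3().get_bucket(bucket_name)
    bucket.delete_key(key_name, version_id=version_id,
                      mfa_token=(serial, token))
```

Remember that each token is good for exactly one request, so a batch of deletes has to wait for the device to cycle between operations.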
<br />
It's not exactly the WORM bucket I was originally hoping for but it's a huge improvement and greatly reduces the risk of accidental or malicious deletion of data from S3. I got my pony!<br />
<br />
The code in the <a href="http://code.google.com/p/boto/source/detail?r=1482">boto subversion repo</a> has already been updated to work with the new Versioning and MFADelete features. A new release will be out in the near future. I have included a link below to a unit test script that shows most of the basic operations and should give you a good start on incorporating these great new features into your application. The script prompts you for the serial number of your MFA device once and then prompts for a new MFA code each time one is required. You can only perform one operation with each code so you will have to wait for the device to cycle to the next code between each operation.<br />
<br />
<a href="http://code.google.com/p/boto/source/browse/trunk/boto/tests/test_s3versioning.py">Example Code</a>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com1tag:blogger.com,1999:blog-1231480044619721857.post-32126945749344088412009-12-21T11:07:00.002-05:002009-12-21T15:41:18.004-05:00Boto 1.9a releasedHi -<br />
<br />
I have just uploaded a new version of boto to the downloads section at <a href="http://boto.googlecode.com/">http://boto.googlecode.com/</a>. Version 1.9a is a significant and long overdue release that includes, among other things:<br />
<ul><li>Support for Virtual Private Cloud (VPC)</li>
<li>Support for Relational Data Service (RDS)</li>
<li>Support for Shared EBS Snapshots</li>
<li>Support for Boot From EBS</li>
<li>Support for Spot Instances</li>
<li>CloudFront private and streaming Distributions</li>
<li>Use of POST in data-heavy requests in ec2 and sdb modules</li>
<li>Support for new us-west-1 region</li>
<li>Fixes for more than 25 issues</li>
</ul>Other than bug fixes and support for any new services, the bulk of the development effort from now on will focus on the boto 2.0 release. This will be a significant new release with some major changes and exciting new features. <br />
<br />
MitchMitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com8tag:blogger.com,1999:blog-1231480044619721857.post-29223228825866483942009-12-16T09:55:00.004-05:002009-12-16T10:01:53.526-05:00Private and Streaming Distributions in CloudFrontBoto has supported the CloudFront content delivery service since its initial launch in November of 2008. CloudFront has recently launched a couple of great new features:<br />
<ul><li>Distributing Private Content</li>
<li>Streaming Distributions</li>
</ul><p>While adding support for these features to boto, I also took the opportunity to (hopefully) improve the overall boto support for CloudFront. In this article, I'll take a quick tour of the new CloudFront features and in the process cover the improved support for CloudFront in boto.</p><p>First, a little refresher. The main abstraction in CloudFront is a Distribution, and all Distributions are backed by an S3 bucket, referred to as the Origin. Until recently, all content distributed by CloudFront had to be public content because there was no mechanism to control access to the content.</p><p>To create a new Distribution for public content, let's assume that we already have an S3 bucket called <code>my-origin</code> that we want to use as the Origin:</p><div><pre><code>
>>> import boto
>>> c = boto.connect_cloudfront()
>>> d = c.create_distribution(origin='my-origin.s3.amazonaws.com', enabled=True, caller_reference='My Distribution')
>>> d.domain_name
d33unmref5340o.cloudfront.net
</code></pre></div><p>So, <code>d</code> now points to my new CloudFront Distribution, backed by my S3 bucket called <code>my-origin</code>. Boto makes it easy to add content objects to my new Distribution. For example, let's assume that I have a JPEG image on my local computer that I want to place in my new Distribution:</p><div><pre><code>
>>> fp = open('/home/mitch/mycoolimage.jpg')
>>> obj = d.add_object('mycoolimage.jpg', fp)
>>>
</code></pre></div><p>Not only does the <code>add_object</code> method copy the content to the correct S3 bucket, it also makes sure the S3 ACL is set correctly for the type of Distribution. In this case, since it is a public Distribution, the content object will be publicly readable.</p><p>You can also list all objects currently in the Distribution (or rather its underlying bucket) by calling the <code>get_objects</code> method and you can also get the CloudFront URL for any object by using its <code>url</code> method:</p><div><pre><code>
>>> d.get_objects()
[<Object: my-origin.s3.amazonaws.com/mycoolimage.jpg>]
>>> obj.url()
http://d33unmref5340o.cloudfront.net/mycoolimage.jpg
</code></pre></div><h3>Don't Cross the Streams</h3><p>The recently announced streaming feature of CloudFront will be of interest to anyone who needs to serve audio or video. The nice thing about streaming is that only the content the user actually watches or listens to is downloaded, so if you have users with short attention spans, you can potentially save a lot of bandwidth costs. Plus, the streaming protocols support serving different quality media based on the user's available bandwidth.</p><p>To take advantage of these features, all you have to do is store streamable media files (e.g. FLV, MP3, MP4) in your origin bucket and CloudFront will make those files available via the RTMP, RTMPT, RTMPE or RTMPTE protocols using Adobe's Flash Media Server (see the <a href="http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/RTMPStreaming.html">CloudFront Developer's Guide</a> for details).</p><p>The process for creating a new Streaming Distribution is almost identical to the process above.</p><div><pre><code>
>>> sd = c.create_streaming_distribution('my-origin.s3.amazonaws.com', True, 'My Streaming Distribution')
>>> fp = open('/home/mitch/embarrassingvideo.flv')
>>> strmobj = sd.add_object('embarrassingvideo.flv', fp)
>>> strmobj.url()
u'rtmp://sj6oeasqgt12x.cloudfront.net/cfx/st/embarrassingvideo.flv'
</code></pre></div><p>Note that the <code>url</code> method still returns the correct URL to embed in your media player to access the streaming content.</p><h3>My Own Private Idaho</h3><p>Another new feature in CloudFront is the ability to distribute private content across the CloudFront content delivery network. This is really a two-part process:</p><ul><li>Secure the content in S3 so only you and CloudFront have access to it</li>
<li>Create signed URL's pointing to the secure content that can be distributed to whoever you want to be able to access the content</li>
</ul><p>I'm only going to cover the first part of the process here. The <a href="http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html">CloudFront Developer's Guide</a> provides detailed instructions for creating the signed URL's. Eventually, I'd like to be able to create the signed URL's directly in boto but doing so requires some non-standard Python libraries to handle the RSA-SHA1 signing and that is something I try to avoid in boto.</p><p>Let's say that we want to take the public Distribution I created above and turn it into a private Distribution. The first thing we need to do is create an Origin Access Identity (OAI). The OAI is a kind of virtual AWS account. Granting the OAI (and only the OAI) read access to your private content keeps the content private while still allowing the CloudFront service to access it.</p><p>Let's create a new Origin Access Identity and associate it with our Distribution:</p><div><pre><code>
>>> oai = c.create_origin_access_identity('my_oai', 'An OAI for testing')
>>> d.update(origin_access_identity=oai)
</code></pre></div><p>If there is an Origin Access Identity associated with a Distribution then the <code>add_object</code> method will ensure that the ACL for any objects added to the distribution is set so that the OAI has READ access to the object. In addition, by default it will also configure the ACL so that all other grants are removed so only the owner and the OAI have access. You can override this behavior by passing <code>replace=False</code> to the <code>add_object</code> call.</p><p>Finally, boto makes it easy to add trusted signers to your private Distribution. A trusted signer is another AWS account that has been authorized to create signed URL's for your private Distribution. To enable another AWS account, you need that account's AWS Account ID (see <a href="http://www.elastician.com/2009/06/managing-your-aws-credentials-part-1.html">this</a> for an explanation about the Account ID).</p><div><pre><code>
>>> from boto.cloudfront.signers import TrustedSigners
>>> ts = TrustedSigners()
>>> ts.append('084307701560')
>>> d.update(trusted_signers=ts)
</code></pre></div><p>As I said earlier, I'm not going to go into the process of actually creating the signed URL's in this blog post. The CloudFront docs do a good job of explaining this and until I come up with a way to support the signing process in boto, I don't really have anything to add.</p>Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com4tag:blogger.com,1999:blog-1231480044619721857.post-4290604449581920412009-12-10T17:54:00.007-05:002012-11-13T11:55:04.554-05:00Comprehensive List of AWS Endpoints<i>Note: AWS has now started their own list of API endpoints <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=3912">here</a>. You may want to begin using that list as the definitive reference.<br />
</i><br />
<i><br />
</i><br />
<i>Another Note: I am now collecting and publishing this information as <a href="https://github.com/garnaat/missingcloud">JSON data</a>. I am generating the HTML below from this JSON data.</i><br />
<i><br />
</i><br />
Guy Rosen (@guyro on Twitter) recently asked about a comprehensive list of AWS service endpoints. This information is notoriously difficult to find and seems to be spread across many different documents, release notes, etc. Fortunately, I had most of this information already gathered in the <a href="http://boto.googlecode.com/">boto</a> source code, so I pulled it out, hunted down the stragglers, and put this list together.<br />
<br />
If you have any more information to provide or have corrections, etc. please comment below. I'll try to keep this up to date over time.<br />
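One pattern worth noting before the list itself: most of the regional endpoints below follow a simple <code>&lt;service&gt;.&lt;region&gt;.amazonaws.com</code> convention, with S3's regional endpoints (which mostly use a hyphen instead of a dot) and the "universal" services as the notable exceptions. A tiny illustrative helper (not part of boto; the pattern is simply inferred from the list):

```python
def regional_endpoint(service, region):
    """Endpoint hostname for services following the common pattern
    (e.g. ec2, sqs, sns, rds, monitoring). S3 regional endpoints and
    'universal' services like iam or route53 do not fit this pattern."""
    return '%s.%s.amazonaws.com' % (service, region)
```

For example, <code>regional_endpoint('ec2', 'us-west-1')</code> gives <code>ec2.us-west-1.amazonaws.com</code>, matching the entry below.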
<br />
<b>Auto Scaling</b>
<ul>
<li>us-east-1: autoscaling.us-east-1.amazonaws.com</li>
<li>us-west-1: autoscaling.us-west-1.amazonaws.com</li>
<li>us-west-2: autoscaling.us-west-2.amazonaws.com</li>
<li>sa-east-1: autoscaling.sa-east-1.amazonaws.com</li>
<li>eu-west-1: autoscaling.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: autoscaling.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: autoscaling.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: autoscaling.ap-northeast-1.amazonaws.com</li>
</ul>
<b>CloudFormation</b>
<ul>
<li>us-east-1: cloudformation.us-east-1.amazonaws.com</li>
<li>us-west-1: cloudformation.us-west-1.amazonaws.com</li>
<li>us-west-2: cloudformation.us-west-2.amazonaws.com</li>
<li>sa-east-1: cloudformation.sa-east-1.amazonaws.com</li>
<li>eu-west-1: cloudformation.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: cloudformation.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: cloudformation.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: cloudformation.ap-northeast-1.amazonaws.com</li>
</ul>
<b>CloudFront</b>
<ul>
<li>universal: cloudfront.amazonaws.com</li>
</ul>
<b>CloudSearch</b>
<ul>
<li>us-east-1: cloudsearch.us-east-1.amazonaws.com</li>
</ul>
<b>CloudWatch</b>
<ul>
<li>us-east-1: monitoring.us-east-1.amazonaws.com</li>
<li>us-west-1: monitoring.us-west-1.amazonaws.com</li>
<li>us-west-2: monitoring.us-west-2.amazonaws.com</li>
<li>sa-east-1: monitoring.sa-east-1.amazonaws.com</li>
<li>eu-west-1: monitoring.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: monitoring.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: monitoring.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: monitoring.ap-northeast-1.amazonaws.com</li>
</ul>
<b>DevPay</b>
<ul>
<li>universal: ls.amazonaws.com</li>
</ul>
<b>DynamoDB</b>
<ul>
<li>us-east-1: dynamodb.us-east-1.amazonaws.com</li>
<li>us-west-1: dynamodb.us-west-1.amazonaws.com</li>
<li>us-west-2: dynamodb.us-west-2.amazonaws.com</li>
<li>ap-northeast-1: dynamodb.ap-northeast-1.amazonaws.com</li>
<li>ap-southeast-1: dynamodb.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: dynamodb.ap-southeast-2.amazonaws.com</li>
<li>eu-west-1: dynamodb.eu-west-1.amazonaws.com</li>
</ul>
<b>ElastiCache</b>
<ul>
<li>us-east-1: elasticache.us-east-1.amazonaws.com</li>
<li>us-west-1: elasticache.us-west-1.amazonaws.com</li>
<li>us-west-2: elasticache.us-west-2.amazonaws.com</li>
<li>sa-east-1: elasticache.sa-east-1.amazonaws.com</li>
<li>eu-west-1: elasticache.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: elasticache.ap-southeast-1.amazonaws.com</li>
<li>ap-northeast-1: elasticache.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Elastic Beanstalk</b>
<ul>
<li>us-east-1: elasticbeanstalk.us-east-1.amazonaws.com</li>
<li>us-west-1: elasticbeanstalk.us-west-1.amazonaws.com</li>
<li>us-west-2: elasticbeanstalk.us-west-2.amazonaws.com</li>
<li>ap-northeast-1: elasticbeanstalk.ap-northeast-1.amazonaws.com</li>
<li>ap-southeast-1: elasticbeanstalk.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: elasticbeanstalk.ap-southeast-2.amazonaws.com</li>
<li>eu-west-1: elasticbeanstalk.eu-west-1.amazonaws.com</li>
</ul>
<b>Elastic Compute Cloud</b>
<ul>
<li>us-east-1: ec2.us-east-1.amazonaws.com</li>
<li>us-west-1: ec2.us-west-1.amazonaws.com</li>
<li>us-west-2: ec2.us-west-2.amazonaws.com</li>
<li>sa-east-1: ec2.sa-east-1.amazonaws.com</li>
<li>eu-west-1: ec2.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: ec2.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: ec2.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: ec2.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Elastic Load Balancing</b>
<ul>
<li>us-east-1: elasticloadbalancing.us-east-1.amazonaws.com</li>
<li>us-west-1: elasticloadbalancing.us-west-1.amazonaws.com</li>
<li>us-west-2: elasticloadbalancing.us-west-2.amazonaws.com</li>
<li>sa-east-1: elasticloadbalancing.sa-east-1.amazonaws.com</li>
<li>eu-west-1: elasticloadbalancing.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: elasticloadbalancing.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: elasticloadbalancing.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: elasticloadbalancing.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Elastic Map Reduce</b>
<ul>
<li>us-east-1: elasticmapreduce.us-east-1.amazonaws.com</li>
<li>us-west-1: elasticmapreduce.us-west-1.amazonaws.com</li>
<li>us-west-2: elasticmapreduce.us-west-2.amazonaws.com</li>
<li>sa-east-1: elasticmapreduce.sa-east-1.amazonaws.com</li>
<li>eu-west-1: elasticmapreduce.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: elasticmapreduce.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: elasticmapreduce.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: elasticmapreduce.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Flexible Payment Service</b>
<ul>
<li>sandbox: authorize.payments-sandbox.amazon.com/cobranded-ui/actions/start</li>
<li>production: authorize.payments.amazon.com/cobranded-ui/actions/start</li>
<li>sandbox: fps.sandbox.amazonaws.com</li>
<li>production: fps.amazonaws.com</li>
</ul>
<b>Glacier</b>
<ul>
<li>us-east-1: glacier.us-east-1.amazonaws.com</li>
<li>us-west-1: glacier.us-west-1.amazonaws.com</li>
<li>us-west-2: glacier.us-west-2.amazonaws.com</li>
<li>eu-west-1: glacier.eu-west-1.amazonaws.com</li>
<li>ap-northeast-1: glacier.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Identity & Access Management</b>
<ul>
<li>universal: iam.amazonaws.com</li>
</ul>
<b>Import/Export</b>
<ul>
<li>universal: importexport.amazonaws.com</li>
</ul>
<b>Mechanical Turk</b>
<ul>
<li>universal: mechanicalturk.amazonaws.com</li>
</ul>
<b>Relational Data Service</b>
<ul>
<li>us-east-1: rds.us-east-1.amazonaws.com</li>
<li>us-west-1: rds.us-west-1.amazonaws.com</li>
<li>us-west-2: rds.us-west-2.amazonaws.com</li>
<li>sa-east-1: rds.sa-east-1.amazonaws.com</li>
<li>eu-west-1: rds.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: rds.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: rds.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: rds.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Route 53</b>
<ul>
<li>universal: route53.amazonaws.com</li>
</ul>
<b>Security Token Service</b>
<ul>
<li>universal: sts.amazonaws.com</li>
</ul>
<b>Simple Email Service</b>
<ul>
<li>us-east-1: email.us-east-1.amazonaws.com</li>
</ul>
<b>Simple Notification Service</b>
<ul>
<li>us-east-1: sns.us-east-1.amazonaws.com</li>
<li>us-west-1: sns.us-west-1.amazonaws.com</li>
<li>us-west-2: sns.us-west-2.amazonaws.com</li>
<li>sa-east-1: sns.sa-east-1.amazonaws.com</li>
<li>eu-west-1: sns.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: sns.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: sns.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: sns.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Simple Queue Service</b>
<ul>
<li>us-east-1: sqs.us-east-1.amazonaws.com</li>
<li>us-west-1: sqs.us-west-1.amazonaws.com</li>
<li>us-west-2: sqs.us-west-2.amazonaws.com</li>
<li>sa-east-1: sqs.sa-east-1.amazonaws.com</li>
<li>eu-west-1: sqs.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: sqs.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: sqs.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: sqs.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Simple Storage Service</b>
<ul>
<li>us-east-1 (US Standard): s3.amazonaws.com</li>
<li>us-west-1: s3-us-west-1.amazonaws.com</li>
<li>us-west-2: s3-us-west-2.amazonaws.com</li>
<li>sa-east-1: s3.sa-east-1.amazonaws.com</li>
<li>eu-west-1: s3-eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: s3-ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: s3-ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: s3-ap-northeast-1.amazonaws.com</li>
</ul>
<b>Simple Workflow</b>
<ul>
<li>us-east-1: swf.us-east-1.amazonaws.com</li>
</ul>
<b>SimpleDB</b>
<ul>
<li>us-east-1: sdb.amazonaws.com</li>
<li>us-west-1: sdb.us-west-1.amazonaws.com</li>
<li>us-west-2: sdb.us-west-2.amazonaws.com</li>
<li>sa-east-1: sdb.sa-east-1.amazonaws.com</li>
<li>eu-west-1: sdb.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: sdb.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: sdb.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: sdb.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Storage Gateway</b>
<ul>
<li>us-east-1: storagegateway.us-east-1.amazonaws.com</li>
<li>us-west-1: storagegateway.us-west-1.amazonaws.com</li>
<li>us-west-2: storagegateway.us-west-2.amazonaws.com</li>
<li>sa-east-1: storagegateway.sa-east-1.amazonaws.com</li>
<li>eu-west-1: storagegateway.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: storagegateway.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: storagegateway.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: storagegateway.ap-northeast-1.amazonaws.com</li>
</ul>
<b>Virtual Private Cloud</b>
<ul>
<li>us-east-1: ec2.us-east-1.amazonaws.com</li>
<li>us-west-1: ec2.us-west-1.amazonaws.com</li>
<li>us-west-2: ec2.us-west-2.amazonaws.com</li>
<li>sa-east-1: ec2.sa-east-1.amazonaws.com</li>
<li>eu-west-1: ec2.eu-west-1.amazonaws.com</li>
<li>ap-southeast-1: ec2.ap-southeast-1.amazonaws.com</li>
<li>ap-southeast-2: ec2.ap-southeast-2.amazonaws.com</li>
<li>ap-northeast-1: ec2.ap-northeast-1.amazonaws.com</li>
</ul>
Mitch Garnaathttp://www.blogger.com/profile/02589240083555476561noreply@blogger.com6