What's That Noise?! [Ian Kallen's Weblog]


20110120 Thursday January 20, 2011

KUSF: A village on the airwaves burned down

As some readers may know, I founded Rampage Radio with the guidance and support of Howie Klein back in 1982. I only stuck around for a few years and thereafter left it in Ron Quintana's able hands. But those were years with impact; I look back at them fondly, and the show has been running on the air ever since. The last broadcast was in its usual time slot last Saturday night. As someone who grew up in San Francisco, I always felt that KUSF's presence at 90.3 was a comforting constant. Apparently a deal to sell off KUSF's frequency was consummated last week, and the signal was abruptly shut off Tuesday morning. A rally and a dialog took place last night at Presentation Theater with USF President Father Stephen Privett. I commend Father Privett for coming out to face the music; all 500 or so of us in the packed theater were upset by these events, and I think it took a degree of courage to show up. However, after the two-hour question and answer session, it became clear to me that Father Privett has suffered a third degree failure.

First, the outcome was poor: the students on whose behalf he claimed to be acting will have reduced volunteer support, and the revenue (purported to benefit students) wasn't subject to a competitive bid (it was the first and only deal under discussion), just an NDA-cloaked back-room agreement. Aside from poorly serving the students, his notion of the University as an island, in which serving the broader community is detrimental to serving the students, is fundamentally flawed. Serving the community and accepting the efforts of volunteers benefits both the students and the broader community.

Second, the process was terrible: instead of stepping back and letting the array of interested parties know that deal discussions might commence, he signed the non-disclosure agreement and completely shut out the faculty, students and community. Instead of embracing the stakeholders and providing some transparency, he went straight to the NDA and ambushed them.

And the third degree failure was the cowardly refusal to recognize the first two.

Father Privett claimed full responsibility and explained his rationale for what he did and the process he followed, but that rationale was weak. Before going under the cover of an NDA, he should have reached out to the students, faculty and volunteers to say: before this goes away, give me some alternatives that will serve you better. Father Privett's gross incompetence was saddening; he should just resign. In the meantime, using another frequency as a fallback for a rejected FCC petition makes sense, but there will always be a sense of a vacated place in our hearts for 90.3, San Francisco's cultural oasis.

I'm certainly hoping that KUSF can reemerge from the ashes. Please join the effort on Facebook to Save KUSF!

( Jan 20 2011, 10:27:58 AM PST ) Permalink


20110116 Sunday January 16, 2011

Managing Granular Access Controls for S3 Buckets with Boto

Storing backups is a long-standing S3 use case but, until the release of IAM, the only way to use S3 for backups was to use the same credentials you use for everything else (launching EC2 instances, deploying artifacts from S3, etc.). Now with IAM, we can create users with individual credentials, create user groups, define access policies and assign those policies to users and groups. Here's how we can use boto to create granular access controls for S3 backups.

So let's create a user and a group within our AWS account to handle backups. Start by pulling and installing the latest boto release from GitHub. Let's say you have an S3 bucket called "backup-bucket" and you want a user whose rights within your AWS infrastructure are confined to putting, getting and deleting backup files in that bucket. This is what you need to do:

  1. Create a connection to the AWS IAM service:
    import boto
    iam = boto.connect_iam()
    
  2. Create a user that will be responsible for the backup storage. When the credentials are created, the access_key_id and secret_access_key components of the response are what the user will need, so save those values:
    backup_user = iam.create_user('backup_user')
    backup_credentials = iam.create_access_key('backup_user')
    print backup_credentials['create_access_key_response']['create_access_key_result']['access_key']['access_key_id']
    print backup_credentials['create_access_key_response']['create_access_key_result']['access_key']['secret_access_key']
    
  3. Create a group that will be assigned permissions and put the user in that group:
    iam.create_group('backup_group')
    iam.add_user_to_group('backup_group', 'backup_user')
    
  4. Define a backup policy and assign it to the group:
    backup_policy_json="""{
      "Statement":[{
          "Action":["s3:DeleteObject",
            "s3:GetObject",
            "s3:PutObject"
          ],
          "Effect":"Allow",
          "Resource":"arn:aws:s3:::backup-bucket/*"
        }
      ]
    }"""
    created_backup_policy_resp = iam.put_group_policy('backup_group', 'S3BackupPolicy', backup_policy_json)
    
Permissions can be applied to individual users but my preference is to put users in a permitted group.
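For the individual-user case, boto exposes an analogous call; a minimal sketch, assuming the boto 2.x IAMConnection.put_user_policy(user_name, policy_name, policy_json) signature (the policy name 'S3BackupPolicy' is just illustrative, reused from the group example above):

```python
# Sketch: attach the backup policy directly to a user instead of a group.
# Assumes boto's IAMConnection.put_user_policy(user_name, policy_name, policy_json);
# the policy document is the same JSON used for the group above.
def attach_backup_policy_to_user(iam, user_name, policy_json):
    return iam.put_user_policy(user_name, 'S3BackupPolicy', policy_json)
```

The group route scales better, though: a second backup host just means another add_user_to_group call rather than another copy of the policy to keep in sync.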

Note: the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables need to be set, or the credentials need to be provided as arguments to the boto.connect_iam() call (which wraps the boto.iam.IAMConnection constructor).
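A sketch of the two ways to supply those credentials; the keyword names are the boto 2.x ones, and the fallback values below are obvious placeholders, not real keys:

```python
import os

# boto.connect_iam() reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment when called with no arguments. To pass them explicitly instead,
# use the boto keyword arguments shown here (placeholder values):
credentials = {
    'aws_access_key_id': os.environ.get('AWS_ACCESS_KEY_ID', 'AKIA-PLACEHOLDER'),
    'aws_secret_access_key': os.environ.get('AWS_SECRET_ACCESS_KEY', 'PLACEHOLDER'),
}
# iam = boto.connect_iam(**credentials)
```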

Altogether Now:

import boto
iam = boto.connect_iam()
backup_user = iam.create_user('backup_user')
backup_credentials = iam.create_access_key('backup_user')
print backup_credentials['create_access_key_response']['create_access_key_result']['access_key']['access_key_id']
print backup_credentials['create_access_key_response']['create_access_key_result']['access_key']['secret_access_key']
iam.create_group('backup_group')
iam.add_user_to_group('backup_group', 'backup_user')
backup_policy_json="""{
  "Statement":[{
      "Action":["s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::backup-bucket/*"
    }
  ]
}"""
created_backup_policy_resp = iam.put_group_policy('backup_group', 'S3BackupPolicy', backup_policy_json)
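When a backup user is retired, the pieces have to be unwound in dependency order, since IAM won't delete a user or group that still has policies, keys or memberships attached. A hedged sketch, assuming the boto 2.x IAMConnection method names (delete_group_policy, remove_user_from_group, delete_access_key, delete_user, delete_group):

```python
# Teardown sketch: strip policies, memberships and keys before deleting
# the user and group themselves. Method names assume boto 2.x IAMConnection.
def teardown_backup_user(iam, access_key_id):
    iam.delete_group_policy('backup_group', 'S3BackupPolicy')
    iam.remove_user_from_group('backup_group', 'backup_user')
    iam.delete_access_key(access_key_id, user_name='backup_user')
    iam.delete_user('backup_user')
    iam.delete_group('backup_group')
```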

While the command line tools for IAM are OK (Eric Hammond wrote a good post about them, Improving Security on EC2 With AWS Identity and Access Management (IAM)), I really prefer interacting with AWS programmatically over remembering all of the command line options for the tools that AWS distributes. I thought using boto with IAM was blog-worthy because I had a hard time discerning the correct policy JSON from the AWS docs. There's a vocabulary for the "Action" component and a format for the "Resource" component that weren't readily apparent, but after some digging around and trying some things, I arrived at the above policy incantation. Aside from my production server uses, I think IAM will make third-party services that require AWS credentials to do desktop backups much more attractive; creating multiple AWS accounts and setting up cross-account permissions is a pain, but handing over the keys to all of your AWS resources to a third party is just untenable.
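One way to sidestep hand-editing that incantation is to build the policy as a Python data structure and serialize it; a sketch using only the stdlib json module, with the same Action/Effect/Resource vocabulary as the policy above (make_backup_policy is just an illustrative helper name):

```python
import json

# Build the same backup policy programmatically; json.dumps guards against
# the quoting and trailing-comma mistakes that are easy to make by hand.
def make_backup_policy(bucket_name):
    policy = {
        "Statement": [{
            "Action": ["s3:DeleteObject", "s3:GetObject", "s3:PutObject"],
            "Effect": "Allow",
            # object-level ARN: the /* suffix scopes the grant to keys
            # inside the bucket, not to the bucket itself
            "Resource": "arn:aws:s3:::%s/*" % bucket_name,
        }]
    }
    return json.dumps(policy)
```

The resulting string can be handed straight to put_group_policy in place of the triple-quoted literal.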

In the meantime, if you're using your master credentials for everything on AWS, stop. Adopt IAM instead! To read more about boto and IAM, check out

( Jan 16 2011, 03:09:15 PM PST ) Permalink