• 17Sep

    I recently got a rather disturbing hit on LinkedIn from a recruiter. I’m not posting names, but here’s the request:

    “Looking for a Cloud Stud”

    Hey There,

    Happy Tuesday! I hope this week is treating you well so far. I am reaching out to you in regards to a brand new position with one of my, not to pick any favorites but…..favorite clients. I know we don’t know each other yet so I apologize if I am wrong, but from what I can see you could be an excellent fit for it. Either way, I would like the chance to mutually expand our networks…but, I encourage you to take a minute to hear about this opportunity and I think you will be glad you did.

    I am working on a very specific, high level Cloud Architect position for my client, one of the world’s leading providers of remarketing services. This role will help guide the infrastructure platform architecture and design for their next generation, new wave technology initiatives. You will focus heavily on cloud platform selections, integration, and automation. For people who are looking to “make an impact” somewhere, you’ve found it. Period.

    We need an Architect who has experience with Azure, Amazon platform, and Saas or PaaS on Amazon Host services. This person MUST understand cloud offering applications that go through development and understand the gap between Operations and Engineering. So, combined experience with engineering, infrastructure, and development is crucial.

    Whether you are a career consultant or someone looking to come in and stay long term, our client is open to either. Basically, they need a Cloud Stud…all the nitty gritty details in between can be discussed and decided upon consideration. If this is not in your wheelhouse or not of interest to you, best of luck in the growth of your career and please reach out in the future. If you think you’re the one to take this on OR know someone who is, we should talk.

    What is the best time and number to contact you today? I look forward to speaking.

    A few things here are typical: the pushy sales close (“what time can I contact you today?”) and the manufactured familiarity (“I know we don’t know each other yet”).

    What really gets me is the heavy “brogrammer” vibe. What makes it worse is that the recruiter (at least, the recruiter as presented in a polished LinkedIn profile) is an attractive young woman.

    In our industry, we see really big gaps in representation across several demographics: we see fewer women, Latino, and Black engineers than in the public at large. (I don’t have numbers. This is from personal observation.) While I disagree with some that we need to weight our hiring in favor of the minorities in our profession, I fully support efforts to widen the talent pool to include people from every background. Programs like Black Girls Code do a lot to encourage young people to consider a profession they may never have considered as a possibility.

    The best way to fight trends like the “brogrammer” is for those of us in the privileged position (meaning people in the default majority, in this case male engineers) to make it known this is not acceptable. When you encounter people being… for lack of a better term… dicks, make them stop. Call them on it, in public if necessary, and be nice about it. Be firm, but nice.

    So, that being said, here’s my response to the recruiter:

    First, please accept my apology about connecting - I only connect on LinkedIn with people I’ve directly worked with or know personally.

    I’ve noticed a disturbing trend in the last few years of cowboy culture in the startup/internet industry: a rise in sexism, elitism, and anti-inclusiveness that, however refreshingly politically incorrect it may seem to some, is just… wrong.

    I don’t know that your client is actively trying to build a stable of talent that promotes this type of culture, but this kind of thing, taken to extremes, can directly damage a company’s brand identity and require heads to roll (see the recent firing of the CTO of Business Insider).

    Respectfully, an ad like this is not advertising an environment I’d like to work within. I’m pretty happy where I’m at anyway. I appreciate the opportunity, though.

    This really got my goat. I hope you feel the same.

  • 03Feb

    Rob Booth at Zenoss has been working on extending the Zenoss barclamp I posted about a few days ago and fixing some of its Zenoss-specific shortcomings. He’s got a great blog post about the issues he’s running into with the barclamp and Crowbar in general. If you’re following the project, this is a great read for understanding the difficulties involved in infrastructure automation, and some of the thought process that goes into overcoming them.

    This barclamp wouldn’t have been possible without the help of both the Zenoss team and Matt Ray from Opscode. Thanks for everyone’s help in wrangling this project together. If you (yeah, you, the one reading this) are following along with tests in your own Crowbar environment, send me some feedback.


  • 25Jan

    Recently I’ve been working with DTO Solutions and Zenoss to develop a barclamp for Crowbar to monitor your Crowbar installation with Zenoss instead of Nagios, if you’re into the ease and smooth goodness of a premium monitoring solution. (They didn’t pay me to say that. I just thought it was cool.)

    I did a screencast to demo the project, and the DTO guys were kind enough to host it on their Vimeo account. Damon’s waxed poetic about the Crowbar project and linked to the screencast as well. You should seriously check out that post.

    You can get the code for the barclamp and play with it at my github account.

  • 26Mar

    So if you ever need to set the hostname of a newly provisioned Red Hat-style box from the reverse DNS PTR record you’ve assigned to that machine (ideally through DHCP), here you go:

    sed -i s/localhost.localdomain/`host \`ifconfig eth0 | grep 'inet addr:'| cut -d: -f2 | awk '{ print $1}'\` | awk '/pointer/{print $5}' | sed s/\.$//`/ /etc/sysconfig/network

    Swap out the eth0 for whichever port is your primary.
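    Unpacked step by step, the one-liner above does the following. This is just the same logic split out for readability; it assumes the same tools (ifconfig, host, sed) are present, and the variable names are mine:

```shell
# Set this to whichever port is your primary.
IFACE=eth0

# Pull the box's IPv4 address out of ifconfig's output.
IP=$(ifconfig "$IFACE" | grep 'inet addr:' | cut -d: -f2 | awk '{print $1}')

# Reverse-lookup the IP, keep the PTR target, and strip its trailing dot.
FQDN=$(host "$IP" | awk '/pointer/{print $5}' | sed 's/\.$//')

# Swap the stock placeholder hostname for the real one.
sed -i "s/localhost.localdomain/$FQDN/" /etc/sysconfig/network
```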

  • 17Mar

    So I’ve been doing a lot of contract consulting lately, which is about to wrap up. I’ve been working with and for some movers and shakers in the cloud world, including John Willis (a good buddy and all-around great guy) and Randy Bias (cloud guru extraordinaire). I’ve had a great opportunity to use and learn many private cloud tools. Eucalyptus, VMOps, OpenNebula, and a few others as well. I’m going to try and find some time to write some detailed info about what I’ve learned soon. Maybe this week.

    Most recently, I’ve done some Chef development on recipes to deploy a local, private cloud on your own hardware using OpenNebula. I just finished a successful demo at CloudConnect where I set up a two-node cloud plus a controller in just under half an hour in front of an audience of about 100 people. It was the biggest demo of my life, and the end result of literally a month’s work. We’re releasing the recipes as open source once I polish them up a little more. I’d really like to add KVM support and true LWRP templates for VM deployment. More to come, please stay tuned.

  • 05Feb

    Sören Bleikertz has been poking around EC2 instances and found some nice ways of seeing what’s under the hood. Check it out at his blog.

  • 14Apr

    and I missed a lot.

    First: a confession. I’m a sporadic blogger at best, so you won’t see me posting early and often here.

    Meat: I missed manifestogate. I followed it via Twitter (I’m @keithhudgins), catching the thread from John Willis, and picked up on Reuven Cohen, who was, unbeknownst to me, one of the net-centric, non-corporate community organizers in the cloud world. This fiasco is what happens when corporate interests get involved in community efforts and find those efforts contrary to their goals. The Cloud Community Manifesto has some good goals behind it, but I’d rather see the businesses involved put some code and APIs where their wallets are. I’ll pontificate more about that in another post.

    Side: Ilya Grigorik had some thoughts about a nice analogy for cloud and new-style virtual resource platform architecture: the assembly line model. Really good stuff, you should take a look.

    Side: Google’s announced Java support for AppEngine. Kinda cool, this will get them some traction from the enterprise crowd. This also means, if you know anything about the JRuby stack (I don’t), you can run Rails apps at Google.

    Dessert: I just found Elastic Server by CohesiveFT. I’m impressed, we’ll be exploring this more in the future.

  • 24Mar

    Got this from InfoWorld, who got it from Microsoft at the Mix09 conference: Microsoft is supporting PHP on Azure. Whoa. A supported open-source environment on Azure?!? They’re talking about their FastCGI environment running other stacks, too. Ruby was mentioned.

    My mind is officially blown.

  • 20Mar

    I just came across this blog post from Tim Bray, which gives some good insider-perspective on what Sun’s got building for a cloud offering. I’m intrigued:

    • It’s not a hosted-application cloud, it’s a real, honest-to-goodness IT virtual datacenter cloud à la Amazon EC2.
    • They’re developing an open API to control the thing. More on that later.
    • The API is so open, you can join the project.

    He also mentions there’s a storage component and a computing component, powered by the Q-Layer technology that Sun acquired in January. Here’s a great YouTube clip of an interview with one of the Q-Layer principals. This is cool for the network admins in the crowd: a drag-and-drop, browser-based interface that lets you build your virtual infrastructure graphically, similar to 3Tera.

    What’s most interesting here is that, according to Tim, Sun REALLY gets the point: open designs, open APIs, and a Creative Commons license on the API. This allows other virtual infrastructure providers to use the API for portability, so that you can build a control interface to manage multiple cloud infrastructures. The point, according to Tim, is “Zero Barrier to Exit.” No customer wants vendor lock-in. Amazon has been somewhat aggressive in protecting their API IP in the one case where someone has white-boxed it: Eucalyptus. With a common API, the portability barriers diminish, so you’ll find most cloud-based ‘mission critical’ infrastructures spanning different offerings. Vendor lock-in means only one company gets that slice of the whole pie, where open barriers mean one customer will likely pick two or more providers to minimize points of failure. That’s a truly positive development that will help the industry as a whole. I just signed up, and I’m looking forward to seeing how I can contribute.


  • 11Mar

    Sometimes you need a system backup. Other times, you need to launch your box ten times. Maybe you’re working on your new web cluster and need to build an image for your web server role. There are tons of reasons, but if you’re using Amazon EC2, there will come a time when you need a custom server image. The simplest way is to boot an instance off a public AMI, make the changes you need, and then roll up a disk image and throw it up to S3. Here’s the easiest way:

    First, make sure you’ve got your /mnt partition built. Most of the public AMIs don’t launch with your big data drive formatted and available, and your disk image will be as big as the system itself, so without /mnt you may fill the root drive. Nothing hurts like a full drive. Nothing.
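    If your AMI came up with the ephemeral disk unformatted, something along these lines gets /mnt ready. The /dev/sdb device name here is an assumption; it varies by instance type, so check your instance’s device listing first:

```shell
# Format and mount the ephemeral disk if /mnt isn't already mounted.
# /dev/sdb is a guess; confirm the device name before running mkfs.
if ! grep -qs ' /mnt ' /proc/mounts; then
  mkfs -t ext3 /dev/sdb
  mkdir -p /mnt
  mount /dev/sdb /mnt
fi

# Confirm there's room for the image before you bundle.
df -h /mnt
```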

    Second, make sure you have the EC2 AMI Tools installed. If not, go ahead and get them on your box. You can download them here.

    I’m also assuming you have followed Amazon’s suggestions in setting up your shell profile so that you have your authentication to AWS in environment variables. Note: if you do this, don’t make your image public! Your credentials will then be in your system for everyone to see. Bad juju, so don’t do it. If you need to make your image public, then make sure you turn off your shell’s history, and type in the credentials on the command line.
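    For reference, the shell-profile setup Amazon’s docs walk you through looks roughly like this. Every value below is a placeholder, not a real path or credential; point them at your own cert, key, and account details:

```shell
# Placeholder values only; substitute your actual cert, key, and
# account details per Amazon's setup documentation.
export EC2_CERT=$HOME/.ec2/cert-YOURCERT.pem
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-YOURKEY.pem
export AWS_ACCOUNT_ID=111122223333
export AMAZON_ACCESS_KEY_ID=your-access-key-id
export AMAZON_SECRET_ACCESS_KEY=your-secret-access-key
```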

    Okay, so here we go:

    ec2-bundle-vol -c $EC2_CERT -k $EC2_PRIVATE_KEY -u $AWS_ACCOUNT_ID -s 10240 -d /mnt
    ec2-upload-bundle -b yourbucket/yourimagename_`date +%Y-%m-%d_%H:%M:%S` -m /mnt/image.manifest.xml -a $AMAZON_ACCESS_KEY_ID -s $AMAZON_SECRET_ACCESS_KEY

    We’ll need a little explanation here. I’ll go step-by-step:

    ec2-bundle-vol makes a series of files that, when pieced together, make a disk image. This overcomes S3’s 5GB file limitation and makes the thing easier to upload. The -s 10240 flag (the units are megabytes) tells it to make a 10GiB image, the biggest it allows, and -d /mnt throws the files in /mnt. If you’ve followed along, your /mnt partition should be big enough to hold all of this. There are other, advanced options that let you manage ramdisks, mount points, kernel images, and other things that you most likely won’t need. And if you do, read the docs. They’re pretty good.

    ec2-upload-bundle takes the manifest from the image.manifest.xml file that ec2-bundle-vol created, and shoves everything up to your S3 bucket named yourbucket, under a key named yourimagename_datestamp. I recommend changing those bucket and image names to something meaningful to you, and feel free to remove the timestamp if you don’t want it. It will loop through all the files in the manifest and get them all up to S3.
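    The backtick expression in the upload command above just expands to a datestamp, so repeated uploads don’t overwrite each other. You can build the same name by hand to see (or reuse) exactly what landed in S3:

```shell
# Build the same bundle name the upload command generates.
STAMP=`date +%Y-%m-%d_%H:%M:%S`
BUNDLE=yourbucket/yourimagename_$STAMP

# Prints something like: yourbucket/yourimagename_2009-03-11_14:02:33
echo "$BUNDLE"
```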

    S3, however, isn’t super reliable for these kinds of things, and ec2-upload-bundle is dumb enough to just crap out. All is not lost, though: you’ll see which parts of your image have already been uploaded, so you’ll know which one comes next. Amazon was kind enough to add the --part flag to start uploading at any arbitrary part. So just run your command again, add --part 23 (or whatever the next part is), and go from there.

    Once your image is up on S3, we need to tell EC2 it’s there so you can use it. The tool to do that is built into the API tools package, which you’ll need to launch your images anyway. Once you’ve got your environment set up right, you can just type:

    ec2-register yourbucket/yourimagename/image.manifest.xml

    Of course, you’ll need to change the names to match what you used in the upload-bundle command, but AWS will come back with an image ID that you can then use to launch your new private image.
