Friday, October 10, 2014

Saving some cash - (Multi domain wildcard certificates)

Passion

I love to save money.  It's best when I save my own money, but saving my company money is great too.  It's part of why I love my job.  As a Developer Happiness Engineer, my job is to make developers' lives easier so they can be more productive, and so we can hire fewer of them to do the same great work.  Really, my job boils down to increasing productivity, reducing server count, and keeping everything repeatable and auditable.

Certificates

One of the necessary evil costs of running a professional website is encrypting traffic.  When you're a healthcare company, this becomes even more critical.  After all, this is pretty sensitive data.  But certificates are not free.  What's worse, I discovered yesterday that even the expensive wildcard ones only cover one level of subdomains.

You see, I was reviewing our URL structure, and realized it was ugly.  We had things like myapp-staging.mydomain.com in one place, and things like staging-someapp.mydomain.com in others.  To make it worse, our Route 53 hosted zone had become way too big, and it was getting hard to manage.  So I decided it was time to clean up the hierarchy and use subdomains to provide some structure to our URLs.

The Problem

Well, we have a wildcard domain cert through a major certificate provider.  It applies to *.mydomain.com.  When I went to create www.develop.mydomain.com I discovered that our cert was not valid for it.  Yep, it only covers one level of subdomain.  So my grand plan was going to cost us a couple thousand dollars by the time I covered all 7+ subdomains we needed.  Lucky for me, only one of our domains is actually used by external customers.  Everything else is internal to our system, or accessed by our own employees.

Self-signed Certificates to the Rescue?

So that means we can use self-signed certs, right?  Well, yes, but how many of them?  And how to distribute them?  Yuk.  If only I could get ONE certificate that covered ALL of my domains and subdomains across ALL internal infrastructure.

A Better Way

Well, it turns out you can.  You just need to create a certificate with Subject Alternative Names (SANs).  I found a blog post on self-signed SAN certs that did a pretty good job.  I just had to clean it up a little and elaborate on it by adding wildcard SANs.  And without further ado, here is the process.

Creating a multi-domain wildcard certificate

Cautions

First, a quick caution.  When you include ANY SANs in a certificate, you must include ALL of the names you want covered.  That is, the CN in your cert is ignored if you have even one SAN listed.  So be sure to include your base domain in your SAN list.

The Extensions File

You must have a file that declares your certificate extensions.  You can do this in your openssl.cnf file, or you can do it (as I do) in a standalone file.

extensions.cnf

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = mydomain.com
DNS.2 = *.mydomain.com
DNS.3 = myotherdomain.com
DNS.4 = *.myotherdomain.com
DNS.5 = *.mysubdomain.mydomain.com


Generate the certificate


# First create a key file (you'll need to enter your password 2X)
openssl genrsa -des3 -out mykey.key 2048

# Next convert it to a passwordless key (remember that password?)
openssl rsa -in mykey.key -out mykey.rsa

# Now you need to create a signing request
# This requires you to enter a bunch of info
# I include:
# Country Name
# State
# City
# Organization Name
# Email Address
# But leave blank the OU, CN, and Password
openssl req -new -key mykey.rsa -out mykey.csr

# Finally, we generate the actual certificate
openssl x509 -req -extensions v3_req -days 365 -in mykey.csr -signkey mykey.rsa -out mykey.crt -extfile extensions.cnf

And that's it: you now have a certificate you can use to secure any domain you listed in that extensions.cnf file.
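One optional sanity check I'd suggest: have openssl print the finished certificate and confirm the Subject Alternative Name section contains every name from your extensions file.  If that section is missing, the -extfile/-extensions flags probably got dropped from the x509 command.

# Optional: confirm the SANs actually made it into the certificate
openssl x509 -in mykey.crt -noout -text | grep -A1 "Subject Alternative Name"
# You should see mydomain.com, *.mydomain.com, and the rest of your alt_names listed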

Good luck!

Monday, May 12, 2014

Musings on Chef

Background

This post should have been written months ago, but I've been busy with a new job (and a healthy dose of laziness), and I've been distracted by Cloud Foundry.  Frankly, I've not really written much Chef code in 5 months now.  So, now that I'm feeling some Chef pain again, I'm back to looking for a way to make life better.  My frustrations largely center on a few areas of integration between cookbooks, as well as on the management of the version-less components of Chef.
  1. Roles are not versioned, which means I can't change the behavior of a role in one environment without impacting others.
  2. Environments, nodes, and databags are not versioned.  
  3. Cross-cutting concerns are everywhere.  Ports, paths, logs, services, etc. I need a way for my cookbooks to communicate these cross-cutting concerns.
  4. Dependency management.  I like to keep my cookbook dependencies clean, but when you have dozens of cookbooks, that gets pretty challenging.  

The Details

Roles

Roles are the forgotten stepchild of Chef.  They were never really needed, and were never done right.  What's a Role?  It's a collection of recipes, other roles, and attributes.  What's a cookbook?  It's a collection of recipes and attributes (plus some other stuff).  In other words, a Role is just a castrated cookbook.  They'd be useful if they hadn't been trimmed down to the point of losing versioning.  With versions missing, there is no way to migrate a change in your roles from develop to staging to production, so people resort to all sorts of odd behavior to allow it.  At my current employer, we have roles with name_<environment> style names.  It's a hack, it's ugly, and it makes it really easy to mess something up.  Besides, a major reason for using Chef is to ensure consistency between environments.  My answer: don't use roles.  Just use very simple cookbooks that wrap other cookbooks and provide the function of roles, while also providing versions, as in the sketch below.
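As a rough sketch of what I mean (the cookbook and recipe names here are made up for illustration), a versioned "role cookbook" is nothing more than a metadata.rb and a one-page default recipe:

# metadata.rb for a hypothetical wrapper cookbook standing in for a "web_server" role
name    'role_web_server'
version '1.2.0'            # this is the part roles can't give you
depends 'nginx'
depends 'my_app'

# recipes/default.rb -- the run list and default attributes the role used to hold
node.default['nginx']['worker_processes'] = 4
include_recipe 'nginx::default'
include_recipe 'my_app::default'

Bump the version, pin it per environment, and you get the develop -> staging -> production promotion that roles simply can't do.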

NOTE: Yes, I know that this prevents you from doing any searches on roles, and likely hides most of the run list from recipe searches too.  However, you really shouldn't be searching roles and recipes anyway.  They are the HOW of a node.  You should be searching the WHAT of a node: what it does, or what it provides.  See the section on cross-cutting concerns to see why I don't need to search for roles and recipes.

Environments, Nodes, and Databags

Mostly this is a workflow issue, as I really only care about having them in version control, not so much about having them version numbered.  Unlike roles, you don't really move environments and nodes from dev to staging, etc.  Databags you could argue either way, and I honestly haven't used them enough to really comment.  I'm still working on a good solution to this, but for now I keep these items locally on my machine in a git repo.  Any changes require a commit before they are uploaded to the server.  Honestly, I wish there were a way to have the Chef server use a git repo for these items.
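In practice the workflow looks something like this (the file names are just examples; the knife subcommands are the standard ones for pushing JSON objects from a chef-repo to the server):

# Commit first...
cd ~/chef-repo
git add environments/staging.json
git commit -m "Pin new app cookbook version in staging"

# ...then push the committed objects to the Chef server
knife environment from file environments/staging.json
knife data bag from file users data_bags/users/deploy.json
knife node from file nodes/app01.json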

Cross-Cutting Concerns

I hate dependencies, and try to limit the number of cookbooks that each of my cookbooks requires, largely because I don't want to tell you how to do things.  Thus, my cookbook should tell you about the logs it creates, the services it provides, the ports it uses, etc.  However, my cookbook should not open ports, process logs, etc.  The challenge is a consistent language between cookbooks to ensure that my firewall cookbook can find all of my ports.  The folks at Infochimps created the silverware cookbook, which goes a long way toward bringing an aspect-oriented approach to cookbook development, but I found their cookbook to be pretty intense, and I had trouble trying to learn it.  So I'm starting development on my own Spicerack cookbook.  The intent is to provide a set of small libraries that will allow for the easy sharing of information between cookbooks.  Presently, I intend to support ports, services, endpoints, logfiles, and possibly directories.  The initial test/use case will be wiring together some web services with a firewall cookbook of some sort, logstash, and kibana.
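To make the idea concrete, here's the flavor of thing I'm after, sketched with plain node attributes rather than any real Spicerack or silverware API (the 'announcements' key and the cookbook split are purely illustrative):

# In the web service's cookbook: announce the port it listens on
node.default['announcements']['ports']['my_web_service'] = 8080

# In the firewall cookbook: open every port any cookbook on this node announced
# (assumes the announcements attribute exists by the time this recipe runs)
node['announcements']['ports'].each do |service, port|
  execute "open-#{service}-#{port}" do
    command "iptables -A INPUT -p tcp --dport #{port} -j ACCEPT"
    not_if  "iptables -C INPUT -p tcp --dport #{port} -j ACCEPT"
  end
end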

Dependency Management

As I already mentioned, I hate dependencies, and do my best to avoid them.  But I also believe in DRY compartmentalized code, so I have plenty of dependencies.  Spork from Etsy goes a long way to making version management of cookbooks easier.  It also makes managing the versions used by each environment easier.  What it doesn't do yet, nor does anything else I've found, is handle the dependencies between cookbooks.  When I update a library cookbook, I'd love a tool that would give me a list of all cookbooks in my library that depend on that updated cookbook.  I could then choose whether to update the dependency (and thus the version # for each of those cookbooks), or leave it alone.  
That said, I'm becoming increasingly convinced that this is a smell in my process.  I've become quite fond of semantic versioning and ~> constraints.  I'm hoping that the combination of these two will go a long way toward making dependency management easier.  Nonetheless, knowing which cookbooks need to use my update would be great.
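For anyone unfamiliar with the pessimistic operator, the constraints live in each cookbook's metadata.rb and look something like this (cookbook names invented for the example):

# metadata.rb
name    'my_app'
version '2.3.1'

# '~> 1.2'   allows any 1.x release at or above 1.2 (1.2.0, 1.3.5, ...), but never 2.0
# '~> 0.4.0' allows only 0.4.x patch releases
depends 'base_java',   '~> 1.2'
depends 'common_logs', '~> 0.4.0'

Combined with semantic versioning, that lets library cookbooks ship patch and minor releases freely, while breaking changes have to be adopted deliberately.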

So there you have it.  Coming soon, How Docker and Conf.d are Changing my Entire DevOps Approach.

Wednesday, February 26, 2014

Growing Pains (and Joys)

Well, I've been on the job for 2 days now, and I'm feeling the pain of new technologies.  Believe it or not, I just got my first smartphone on Monday.  I'm cheap, and have never been able to bring myself to pay more than $20 a month for cell coverage, so smartphones were out of the question.  I can say that learning to use an Android phone has been a challenge, but a wonderful experience.  I feel more connected, and love the ability to access things when I'm away from home.  I'm sure I'll feel the leash that comes with it soon enough, but overall I'm thrilled to finally be carrying a modern phone.
My new laptop, however, is another story.  I got my first Mac yesterday, a nice new MacBook Pro.  I will say that the size and battery life are a wonderful step up from the Dell I had when I worked at Lockheed.  Learning a new OS, however, is killing me.  I feel kinda clueless when I have to use Google or ask a friend just to figure out simple things like: where is my terminal?  How do I install apps?  I don't care what they say, the idea that Apple products "just work" is laughable to me.  I'm still trying to find an email client that I'm happy with.  That said, it's been one day, so I'm sure that OS X will grow on me.  The sad thing is that I'm probably a month away from hating both Windows and OS X.  I'll be one of those dual users who is always wishing for the features of the OS I'm not using at the moment.
The final challenge, which I will start today, is the new software stack.  CareKinesis is using a lot of tools that I've only read about.  I'm thrilled for the challenge of learning new software, and the joy of finding great ways to use it to make myself more effective, but I'm also very aware of how mentally straining it will be to learn 2 or 3 new tools per day for a few weeks.  Meanwhile, my two-man team will be making some major decisions in the next few weeks about additional tech we want to add to the stack.  It will be mentally draining for a while, but it's going to be a fun ride.

Wednesday, February 5, 2014

A grand new adventure

On Friday the 7th of February, I will officially end my career at Lockheed Martin after almost 8 years.  It has been a wild ride, but I'm glad to see it end.  Perhaps in a week or two I'll write more on why I'm glad to leave, but for today I want to focus on what's next for me: my grand new adventure.

On the 24th I'll start my new job at a small healthcare company, CareKinesis.  The company is focused on medication management, essentially helping manage the drug regimens of people who take lots of different pills on a daily basis.  We write software to handle the prescription process, and we have a call center, a pharmacy, etc.  My role will be as a "Developer Happiness Engineer", and it will be awesome.  As we roll things out, I plan to blog about the details and keep all of you up to date.  I'm particularly excited about this company because they are so open and community friendly.  This new opportunity will let me contribute to open source more, blog more, and generally be involved in the software community, and I'm psyched about it.

What's a Developer Happiness Engineer?  Glad you asked.  The job is simple: do everything you can to ensure the development team is happy and never has to do anything that isn't fun.  We automate builds, integration, testing, and deployment.  We then layer monitoring and alerting on top of that, with a super awesome dashboard to give you instant insight into all of the above.  The idea is that, eventually, a developer will be able to write new code, push a button, and 5 minutes later their code will be on the production system.  They can then look at graphs that show the tests passing, the integration work succeeding, and various metrics of system performance, with lines indicating when their code hit the various systems.  They can look to see if their memory bug really got fixed, their compression scheme worked, or if their refresh times really went down.  We've got a long way to go to get to that point, but it will be a fun ride.  If you want to know more about what this sort of system looks like, take a look at codeascraft.com (Etsy's blog).  They're already doing most of this, and are basically the gold standard for it.

Anyhow, that's what's next for me, and I'm super excited to get started.