Drinking our own Kool-Aid: Moving to a Colo Facility

Eric Christiansen


I have been with Kraft Kennedy for over 15 years. Until a couple of weeks ago, we housed our systems internally, like many other companies, and as a result we have seen our fair share of disasters and close calls over the years:

  • We lost our communication lines on September 11, 2001.
  • We lost power during the Blackout of 2003.
  • We lost building access for a week after the NYC steam pipe explosion of 2007 (it was so close to our building that it broke some of Pete Kennedy’s office windows).
  • We had a close call during Sandy. We didn’t lose power, but most people just a few blocks south of us in Manhattan did for several days.

In addition, we’ve had to shut down systems due to our own air conditioning failures as well as building power shutdowns.

Considering all of the above, we finally decided to take the advice we regularly give to clients and move our primary systems to the IO New Jersey Data Center in Edison, New Jersey as part of Kraft Kennedy’s move to new offices.

Moving to a datacenter lets us save room in the new office because we no longer need to build out a complex server room. We only need space for a single rack to house local switches and a single server that keeps printing and desktop imaging local. It also significantly reduced the technical complexity of the move: for anyone working outside of our NY office, it was business as usual. We told our users to take their laptops home as soon as their desks were packed and to work from home or at a client site until the move was complete.

Our old server room. Lots of square footage we won’t need in the new office.

We followed the same process we recommend to our clients, in five easy steps:

  1. We purchased all new equipment (servers, SAN, switches, and firewalls). Moving old equipment that’s currently in production is risky (don’t drop the SAN!) and involves a lot of downtime.
  2. We use Dell EqualLogic SANs and a VMware environment. We built and configured everything in our New York office and set up SAN replication between the current and new SANs while they were still sitting side by side. Doing the initial seed over gigabit is a lot faster than going over a WAN connection (see the sketch after this list).
  3. Once the build in the office was complete, we installed all of the new equipment in the datacenter in New Jersey.
  4. We then resumed nightly replication between the current infrastructure and the new infrastructure in the IO datacenter.
  5. When we were ready, we performed the final cutover on a Friday evening. With all of the hardware in place, this was all done remotely and was ultimately low-risk. We knew that if something went wrong, we could always just power up the original equipment. The final cutover from start to finish took about eight hours. We had core systems up in as little as three hours. We cheated a bit with email and ran that out of our Houston office (we have a multi-site Exchange 2010 DAG), so email was never unavailable.
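To make the “seed locally, then replicate over the WAN” math concrete, here is a rough back-of-the-envelope sketch. The dataset size, WAN speed, and efficiency factor below are illustrative assumptions, not figures from our actual migration.

    # Rough transfer-time estimate for the initial replication seed.
    # All of the inputs below are hypothetical, for illustration only.

    def transfer_hours(data_tb, link_mbps, efficiency=0.7):
        """Hours to move data_tb terabytes over a link_mbps link,
        assuming we only achieve the given fraction of line rate."""
        data_bits = data_tb * 1e12 * 8               # terabytes -> bits
        effective_bps = link_mbps * 1e6 * efficiency
        return data_bits / effective_bps / 3600

    data_tb = 5                                      # hypothetical dataset size
    print(f"Gigabit LAN seed: {transfer_hours(data_tb, 1000):.0f} hours")  # ~16 hours
    print(f"50 Mbps WAN seed: {transfer_hours(data_tb, 50):.0f} hours")    # ~317 hours (about 13 days)

Once the full copy exists at the datacenter, only the nightly deltas have to cross the WAN, which is why the initial seed is the only step that really benefits from having both SANs on the same gigabit network.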

For the actual setup in the IO datacenter, we were able to consolidate the three racks of equipment we had in our New York office down to a single rack that houses both our production and research environments. The core of our production environment consists of three Dell R620 servers and two Dell EqualLogic SANs. Our research environment is essentially the same, just on slightly older hardware (our old production equipment, which we moved in after we cut over to the new hardware).

Our rack at the IO datacenter in New Jersey.

As you can see, this rack is just about completely full. Any empty areas in the front of the rack have equipment mounted on the rear (switches and PDUs). We would not have been able to do this in most other datacenters because of per-cabinet power and cooling limits. IO let us pull dual redundant 30A 208V circuits (four in total) into the single cabinet. By switching to 208V, we can draw roughly 70% more power from each circuit than a standard 120V feed of the same amperage would allow (Power = Volts x Amps). With its modular design and advanced cooling, IO can handle the amount of heat being generated; in fact, during planning, IO never even asked how much heat our equipment would produce. That said, we probably wouldn’t load up a cabinet with this type of equipment again. Most of these servers are VMware hosts with seven NICs each, so cable management proved to be very difficult. I would not trust remote hands to troubleshoot anything on the rear of the rack with the current setup.
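To put rough numbers on the “Power = Volts x Amps” point, here is a quick sketch of the cabinet’s usable power budget. The 80% continuous-load derating and the decision not to count the redundant A/B feeds toward usable capacity are planning assumptions of mine, not specifications from IO.

    # Rough power-budget math for a cabinet fed by dual redundant 30A 208V circuits.
    # The derating factor and redundancy treatment are assumptions, not IO specs.

    VOLTS = 208
    AMPS = 30
    DERATE = 0.8           # plan for 80% of breaker rating for continuous loads
    USABLE_CIRCUITS = 2    # four circuits total, but two are the redundant feeds

    per_circuit_kw = VOLTS * AMPS * DERATE / 1000
    cabinet_kw = per_circuit_kw * USABLE_CIRCUITS

    print(f"Usable power per circuit: {per_circuit_kw:.2f} kW")          # ~4.99 kW
    print(f"Usable power for the cabinet: {cabinet_kw:.2f} kW")          # ~9.98 kW
    print(f"Same breaker at 120V: {120 * AMPS * DERATE / 1000:.2f} kW")  # ~2.88 kW

The last line is the comparison behind the 208V decision: the same 30A breaker delivers roughly 70% more usable power at 208V than at 120V.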

Now that our servers are in a remote datacenter, the actual office move was quite easy. Our final office space won’t be ready for a few months, so we’ll be in temporary space in the new building. That doesn’t matter to us: all we need is an Internet connection and a firewall to set up a VPN link to the datacenter and we’re in business. As part of these upgrades, we also implemented Lync 2013 as our primary phone system and installed an IntelePeer SIP trunk for our voice traffic. This spares us from lengthy voice T1 installations, since all voice calls use our Internet connection in the datacenter.

Kraft Kennedy had been talking about doing this for several years. We decided not to delay any longer: colocation technology has advanced enough to significantly lower the cost of moving to a facility, making it quite feasible for smaller firms like ours. We now have the peace of mind of knowing that our core back-end systems sit in a highly available facility.