Developing a Multi-Pod Data Center with Cisco ACI
At HYPOPORT, what we do is complex, but our company has a simple purpose: to create excellent experiences for our customers’ customers. We’re a multifaceted software company with multiple areas of focus.
HYPOPORT is made up of several autonomous subsidiaries. We are, in essence, a network of companies creating solutions that fall under the following services: credit, private clients, real estate, and insurance. Some companies within our network include EUROPACE AG, Fio Systems AG, Qualitypool GmbH, Dr. Klein, Finmas GmbH, and Smart InsureTech AG.
Our platforms simplify the online experience whether our customers are using our software to power their end-user platforms or give their customers access to information and services through an online portal.
The Ultimate Challenge: A Data Center Overhaul 15 Years in the Making
I joined HYPOPORT Systems, a service company within the HYPOPORT Group, last summer to revamp the company’s data center. Just one colleague and I run and maintain the IT network and infrastructure for the whole company, including the data centers and the network infrastructure at our sites and our headquarters. Our team also includes a small group of specialists in Linux, VMware, and SQL. We also give our organizations the ability to create virtual machines in our data centers. It’s our team’s job to keep our IT systems running smoothly.
My first order of business as security network architect was to redesign and deploy new data centers for the organization. Our existing data center had some issues. As our company grew and evolved over the past 15 years, changes were made to our systems.
The data center and the network infrastructure had become a patchwork of pieces added or changed over time. It had gotten to the point where we didn’t have the insight to properly troubleshoot our problems, and no one had a complete overview of the data center.
This was problematic because our clients’ customers were sometimes disconnected from servers. Colleagues were nervous about making improvements to the data center in case they accidentally “broke” something and couldn’t figure out how to fix it.
But my arrival at the company was timed with a fantastic opportunity for change: HYPOPORT was moving our headquarters out of Berlin, Germany, and relocating to Lübeck in northern Germany. The Berlin part of the company would stay in Berlin but move to a new, taller building without data center capabilities. This was a big chance to completely overhaul our systems and infrastructure, and to essentially start over. We couldn’t take the data center with us, so we had to build a new multi-site data center with test and staging capabilities built in.
In addition to cohesion throughout the new system, security was our main priority in building the multi-site data center. We needed to ensure that our servers, and our clients who use our services, were individually secured with filtering between the networks and different sites.
When it came time to build, we looked at options like VMware’s software-defined networking solution, but it wasn’t the right fit. Our environment requires different physical connections, and we couldn’t do software-defined networking on a distributed virtual switch in a VMware ESXi server. After looking elsewhere in the market, we decided the best solution for our needs was Cisco ACI.
Ready, Set, Deploy: Choosing Cisco ACI
My colleagues had heard about Cisco ACI through one of our partners. I was brought in to build and deploy the solution because I’ve designed network infrastructures for years and I’ve used Cisco since 1996. While I hadn’t used Cisco ACI before this instance, I knew Cisco solutions were easy to learn, so I’d be able to get up to speed quickly. Cisco ACI infrastructure is easy to implement because of the way Cisco pre-configures the solution—much of the guesswork is removed from the deployment.
A key goal in implementing Cisco ACI was to establish an active backup data center environment while reducing our associated costs. We’ve also enabled Layer 3 networking in our environment because we want to use both data centers independently of each other while also allowing them to converge.
If a service running in data center A fails, the same service also runs in data center B. We run this alongside a few Docker containers and some high-availability proxies that share the load across both data centers. This removes much of our anxiety around system failures, because if one data center fails, the services in the other data center can continue to run without requiring us to intervene.
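The proxy layer described above can be sketched in a few lines of configuration. This is a minimal, hypothetical example assuming HAProxy as the high-availability proxy; the service names, hostnames, and ports are illustrative and not from the actual deployment:

```
# Hypothetical HAProxy sketch: one service balanced across
# instances in both data centers (all names/ports are examples).
frontend app_frontend
    bind *:443
    default_backend app_servers

backend app_servers
    balance roundrobin
    # Health checks take a data center out of rotation if its
    # instance fails, so the surviving site keeps serving traffic.
    server dc-a app.dc-a.internal:443 check
    server dc-b app.dc-b.internal:443 check
```

With both servers active and health-checked, traffic is shared in normal operation and automatically concentrates on the surviving site during an outage, with no manual intervention.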
Adding Layers of Security and Failsafes
Between our data centers, we also have multiple dark fiber connections. Layer 2 isn’t a load-balancing technology; it’s a failover technology, which means you need real load balancing in your network. This was yet another reason to implement Layer 3 as the underlay network: it lets us fully utilize the dark fiber. Currently, we run our dark fiber connections at 10 Gb/s, but they also allow us to expand up to 40 Gb/s in the future, as needed.
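The difference between a Layer 2 standby link and a routed underlay can be illustrated with a short configuration sketch. This is a hypothetical IOS-XE fragment (interface names, addresses, and the choice of OSPF are assumptions, not the actual design): with two equal-cost routed point-to-point links, the routing protocol installs both paths and traffic is shared across them, instead of one fiber idling as a Layer 2 failover standby.

```
! Hypothetical sketch: ECMP over two routed dark-fiber links.
router ospf 1
 router-id 1.1.1.1
!
interface TenGigabitEthernet1/0/1
 description Dark fiber link 1 to data center B (example)
 ip address 10.0.12.1 255.255.255.252
 ip ospf network point-to-point
 ip ospf 1 area 0
!
interface TenGigabitEthernet1/0/2
 description Dark fiber link 2 to data center B (example)
 ip address 10.0.12.5 255.255.255.252
 ip ospf network point-to-point
 ip ospf 1 area 0
```

Because both links have equal OSPF cost, both routes land in the routing table and the switch load-shares across them by default.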
Adding the DMZ has been a major improvement to our security. It separates our network into 16 zones, each further separated by a firewall for extra security. Every zone is stretched over both data centers, with its own Layer 3 subnets in each.
For example, we’ve got Cisco ISE running for 802.1X (dot1x) authentication, with two Cisco ISE appliances: one in data center A and one in data center B. We want them to talk to each other for failover and clustering, but we also want every client to be able to reach both data centers. When the appliances talk to each other, the traffic doesn’t have to go over the firewall. But when a client wants to reach ISE, its traffic has to go over the firewall, no matter which ISE node it is trying to reach.
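From a switch’s point of view, this redundancy shows up as two RADIUS servers in one server group. The following is a hypothetical IOS-XE sketch, not the actual configuration; the server names, addresses, and shared secret are made up for illustration:

```
! Hypothetical sketch: one ISE node per data center, grouped so
! 802.1X authentication fails over between them.
aaa new-model
radius server ISE-DC-A
 address ipv4 10.10.0.10 auth-port 1812 acct-port 1813
 key example-shared-secret
radius server ISE-DC-B
 address ipv4 10.20.0.10 auth-port 1812 acct-port 1813
 key example-shared-secret
!
aaa group server radius ISE-SERVERS
 server name ISE-DC-A
 server name ISE-DC-B
!
aaa authentication dot1x default group ISE-SERVERS
dot1x system-auth-control
```

If the node in one data center stops responding, the switch falls back to the next server in the group, so clients can still authenticate against the surviving site.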
We also use Cisco Firepower 4110 Series security appliances in both of our data centers, which gives us different routing paths in the environment. Here it becomes a little more complicated, but despite the additional layer of complexity, it’s much easier to configure, change, and use on an everyday basis. We also use virtual routing and forwarding (VRF) on the Cisco Catalyst 9500 switches that serve as our DMZ cores.
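The per-zone separation on the DMZ cores can be sketched with VRF-lite on IOS-XE. This is a hypothetical fragment (zone names, VLAN numbers, and addresses are invented for illustration): each DMZ zone gets its own VRF, with one routed interface per data-center subnet, so zones stay isolated from one another unless traffic crosses the firewall.

```
! Hypothetical sketch: one VRF per DMZ zone on a Catalyst 9500
! DMZ core, with an SVI for each data center's subnet.
vrf definition DMZ-ZONE-01
 address-family ipv4
 exit-address-family
!
interface Vlan101
 description DMZ zone 01, data center A subnet (example)
 vrf forwarding DMZ-ZONE-01
 ip address 10.101.1.1 255.255.255.0
!
interface Vlan201
 description DMZ zone 01, data center B subnet (example)
 vrf forwarding DMZ-ZONE-01
 ip address 10.101.2.1 255.255.255.0
```

Because each zone lives in its own routing table, there is no direct routed path between zones; inter-zone traffic has to pass through the firewall by design.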
Getting Results: Tracking Packets, Tightening Security, and Streamlined Processes
Now that everything’s changed within our data centers, all of our routes are known. One of the biggest issues with the old infrastructure was that every route or anything we wanted to reach for a client was running over a default route.
We never have to worry about that again. Now, we have only two default routes to the internet's firewall, and any network on the VPN or within a data center, Cisco ACI tenant, or on one of our sites, is known in the routing table. You can see, every time, where your packet is going and why.
That visibility is vital to maintaining and securing the system, especially compared to how it was before. In Cisco ACI, every company has its own tenant and can even have a multitenant setup. Within each tenant, they can separate the services their applications provide. Each tenant has to consider why certain servers are talking to one another and whether they want that to happen. It is also impossible for servers from other companies to reach servers they don’t own, which is crucial for our needs. Again, this is where the DMZ adds another layer of security.
The biggest benefit of switching to Cisco ACI, by far, is that we can confidently identify and state where the traffic is going (and explain why), as well as monitor it in real time.
A Stable IT Infrastructure, Now and Into the Future
As a result of making these sweeping changes by transitioning to Cisco ACI, we’ve been able to provide better internal services to our employees, our customers, and our customers’ customers.
We now have a stable infrastructure that’s easier to manage. We’ll be moving to our new office in February and, before that, we’ll move our virtual machines and physical services to our new data centers. When our full migration is complete, we’ll have more than 20 sites connected to the data center.
Implementing and deploying Cisco ACI has been a major paradigm shift for how we do business, how we run our operations, and how our customers and their customers experience our services. Our data centers aren’t thought about by most of our end users, but their presence is felt when things go wrong. We’re not all the way through our transition yet—we still have a ways to go. But once we’ve completed our migration and our new office is up and running, hopefully our behind-the-scenes work will stay behind the scenes.