By Patrick McFadin, DataStax

When the gap between enterprise software development and IT operations was bridged nearly 15 years ago, building enterprise apps underwent a sea change. DevOps swept away slow, manual processes and embraced the idea of infrastructure as code, a change that dramatically improved teams' ability to scale rapidly and deliver reliable applications and services in production.

Building services in-house has been the status quo for a long time, but in a cloud-native world, the lines between cloud and on-prem have blurred. Cloud-based third-party services, built on powerful open source software, are making it easier for developers to move faster. Their mandate is to build with innovation and speed to compete in hyper-fast markets. For all application stakeholders, from the CIO to development teams, the path to simplicity, speed, and reduced risk often runs through cloud-based services that make data scalable and instantly available.

Both of these views exist, side by side, in many established organizations we work with. Yet they can be at odds with each other; in fact, we’ve often seen them work in counterproductive ways, to the point of slowing down application development.

There may be compelling reasons to take everything in-house, but end users vote with execution. Here we will examine the point of view of each group and try to understand its motivations. It’s not a zero-sum game, and the real answer may be the right combination of the two.

Building services

Infrastructure engineers build the car. They are the ones who stay up late, nurse aging infrastructure, and keep the lights on in the company. Adam Jacob (the co-founder and former CTO of Chef Software) famously said, “It’s the operations people’s job to keep the developers’ crap code out of your beautiful production infrastructure.” If you want to take your project or product into the hallowed grounds of what they’ve built, it has to be worthy. Infrastructure engineers will evaluate, test, and bestow their blessing only once they themselves are convinced.

The principles of the infrastructure engineer include the following:

  • Every implementation is different and requires skilled infrastructure engineers to ensure success.
  • Applications are driven by requirements, and infrastructure engineers deliver the right product to meet them.
  • The most convenient way to use the cloud is to do it yourself.

What infrastructure engineers care about

Documentation and training

Having a clear understanding of every aspect of the infrastructure is key to making it work well, so comprehensive and clear documentation is a must. It also needs to be updated; as new product versions are released, the documentation should keep everyone aware of what has changed.

Version numbers

Products must be tested and validated before going into production, so infrastructure teams keep track of which versions are authorized for production, and updates must be tested as well. Security is a key part of testing, and we generally stay a step behind the cutting edge for maximum stability and security.

Performance

Performance is also key. Our teams need to understand how the system behaves in various environments to plan for adequate capacity. Systems with highly variable performance characteristics, or that do not meet the minimum requirements, will never be implemented. New products must prove themselves in a trial by fire before they are even considered.

Use of Services

Getting infrastructure up and running is a source of friction when building applications. Nothing is more important than the speed of getting an application to production. Operations teams love the nuances of how things work and take pride in running a well-oiled machine, but developers don’t have months to wait for that to happen. Winning against competitors means renting what you need, when you need it. Provide us with an API and a key and let us run.

When it comes to infrastructure, developer principles include:

  • The infrastructure must conform to the app, not the other way around
  • Don’t invent new infrastructure; combine what is already available
  • Consume compute, network, and storage like any other utility

What service consumers care about

Does it fit what I need, and can I validate it quickly?

The app is the center of the developer’s universe, and what the app needs sets the requirements. If a candidate service meets those criteria, developers want to validate it quickly. If an app spends a lot of time bending and twisting to get one service to work, developers will simply look for a different service that works better.

Cost

Developers want the lowest cost for what they get, and nothing so complicated as to require a spreadsheet. With services, developers don’t necessarily believe that “you get what you pay for” and that more expensive means better. Instead, they expect the cost to decrease over time as the service provider finds efficiencies.

Availability

Developers expect a service to always work, and when it doesn’t, they get annoyed, much as they do when the electricity goes out. Even if there is an SLA, they will most likely not read it and will simply expect 100% uptime. Developers building an app assume there will be no downtime.
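Of course, no service truly delivers 100% uptime, so application code often hedges with retries. A minimal sketch of that pattern, assuming a generic callable rather than any specific service SDK:

```python
import random
import time


def call_with_retries(fn, max_attempts=4, base_delay=0.5):
    """Call fn(), retrying transient failures with exponential backoff.

    A generic sketch: fn, max_attempts, and base_delay are illustrative
    names, not part of any particular provider's API.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            # Back off exponentially, with jitter so many clients
            # don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Wrapping every remote call this way lets a service's occasional blips stay invisible to end users, which is closer to the "always on" behavior developers assume.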

In the end, the app matters the most

In working with many organizations where applications are mission-critical, we’ve often seen that these two groups don’t work particularly well together: at times, their respective approaches can even be counterproductive. This friction can significantly slow application production and even hinder an organization’s move to the cloud.

This friction can manifest itself in several ways. For example, relying on homegrown infrastructure can limit how developers access the data needed to build applications. This can limit innovation and introduce complexity into the development process.

And sometimes balancing cloud services with purpose-built solutions can actually add complexity and drive up costs by watering down the expected savings from moving to the cloud.

Application development and deployment is cost sensitive, but requires speed and efficiency. Anything that gets in the way can lead to a diminished competitive advantage and even a loss of revenue.

However, we also know organizations that have intelligently combined the efforts of the infrastructure engineers who manage their mission-critical apps today and the developers who use services to build them. When the perspective and expertise of each group is brought to bear, flexibility, cost efficiency, and speed can result.

Many successful organizations today are implementing a hybrid of the two (for now): custom infrastructure combined with services leased from a provider. Several organizations are leveraging Kubernetes in this quest for a grand unified theory of infrastructure. A single deployment model can contain blocks that create pods and service endpoints alongside other blocks that describe endpoints consumed on a pay-per-use basis. If you’re using a cloud with Kubernetes, think of storage and networking services.
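The hybrid pattern above can be sketched as two Kubernetes "blocks": a Deployment that creates pods in-house, and an ExternalName Service that gives a leased, pay-per-use endpoint a cluster-local name. Shown here as Python dictionaries mirroring the manifest structure; the app name and provider host are hypothetical:

```python
# In-house block: a Deployment that creates pods inside the cluster.
in_house_app = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders-api"},  # hypothetical app name
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "orders-api"}},
        "template": {
            "metadata": {"labels": {"app": "orders-api"}},
            "spec": {
                "containers": [
                    {"name": "orders-api",
                     "image": "registry.example.com/orders-api:1.4.2"}
                ]
            },
        },
    },
}

# Leased block: an ExternalName Service that resolves a cluster-local
# name to a provider endpoint outside the cluster.
leased_db = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "orders-db"},
    "spec": {
        "type": "ExternalName",
        "externalName": "db.example-provider.com",  # hypothetical host
    },
}
```

Application pods simply connect to `orders-db`; swapping the leased database for a self-managed one later changes only this Service block, not the application.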

There are other important elements in an organization’s universe of services, whether they are built or bought. Standard APIs are the de facto way to serve data to applications, and they reduce time to market by simplifying development. SLAs, both external and internal, also clearly outline scalability and other performance expectations, so developers don’t have to.
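A thin client over a standard JSON-over-HTTP API is often all an application needs. A sketch of that idea, assuming a hypothetical data service (the base URL, path, and header are illustrative), with the transport injected so it can be stubbed without a network:

```python
import json
from typing import Callable


class DataClient:
    """Minimal client for a hypothetical JSON-over-HTTP data API.

    In a real app the transport would be an HTTP library call that
    sends the request and returns the response body as a string.
    """

    def __init__(self, base_url: str, api_key: str,
                 transport: Callable[[str, dict], str]):
        self.base_url = base_url.rstrip("/")
        self.headers = {"Authorization": f"Bearer {api_key}"}
        self.transport = transport  # (url, headers) -> response body

    def get_record(self, record_id: str) -> dict:
        # Standard shape: GET {base_url}/records/{id} returning JSON.
        url = f"{self.base_url}/records/{record_id}"
        return json.loads(self.transport(url, self.headers))
```

Because the contract is a plain URL-and-JSON convention, the same client code works whether the endpoint is a leased service or something the infrastructure team runs in-house.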

Finally, I’d like to point out that this is an immediate challenge in the open source data world I live in. I work with Apache Cassandra®—software that you can download and distribute for free in your datacenter; free as in beer and free as in liberty. I also work on the K8ssandra project, which helps builders deliver Cassandra as a service for their customers using Kubernetes. And DataStax, the company I work for, offers Astra DB based on Cassandra, which is a simple service for developers with no operations required. I understand the various points of view and I am glad that there is a choice.


About Patrick McFadin:


Patrick is the co-author of O’Reilly’s book “Managing Cloud Native Data on Kubernetes”. He works at DataStax in developer relations and as a contributor to the Apache Cassandra project. Previously he worked as the head of engineering and architecture for various Internet companies.
