The technological landscape continues to evolve at a fantastic pace, and staying on top of it all can be challenging. Despite the high rate of change, I think there are some “timeless” lessons we’ve learned over the last two decades, lessons that will continue to hold true for the foreseeable future. Here are three lessons that are part of our DNA today and are integrated into our daily thinking.

The first is that the demand for robust, high-performance Internet access and applications consistently increases. It never shrinks. Our clients today are getting much more comfortable moving their applications off-site and into the cloud, so reliable, fast, low-latency connections to the network are becoming increasingly vital to daily operations. Furthermore, our users are connecting to their data through a dizzying array of devices, applications, and APIs from a wide range of geographic locations. This trend is only going to continue as more computing power is packed into smartphones and tablets, and small-footprint IoT (Internet of Things) devices like Arduinos and Raspberry Pis multiply.

The second is that good data and application security cannot be an afterthought. Protecting data, and your users’ access to it, has to be a core element of the system from Day 1. Good security is not something you do once and then assume you’re done, nor is it something you bolt onto an already-built system. Good security requires processes that are enforced, systems and software that are monitored around the clock, and software updates and security patches — at least at the operating system level — for the lifespan of the application. Failing to take security seriously from the outset means that your critical systems might be exposed to compromise, and that critical business data might be corrupted or lost.

The third is that a tremendous amount of planning and care is needed to integrate new Internet services into a client’s enterprise with nearly zero downtime for the end user. This cannot be done haphazardly. It requires knowledge of a client’s working environments, their online habits, their schedules, their processes. It requires critical thinking and the judgment necessary to weigh competing priorities and create installation plans that minimize negative ripple effects when new systems are brought online. It requires the ability to communicate clearly, on both a technical and an operational level. A client can’t have a positive technology experience if they don’t understand what’s going on, if they don’t know who is leading the project, or if they never know where they are in the process.

For the last few years I’ve used a line from a superhero movie to describe the importance of the role we at DataYard play on behalf of our clients: “With great power comes great responsibility.” We take the management of our entire infrastructure, and the management of individual client applications from end-to-end, very seriously. When you have the power to bring an enterprise’s technology to a screeching halt you tend to open technical doors very carefully. You only open those doors when you absolutely have to. You do it with a purpose, and you know — in advance — exactly what you’re going to do when you’re on the other side. To be careless with a client’s applications or data only invites disaster.

Nobody likes disasters, including technological disasters. Responsible technologists avoid disasters by first imagining all the things that could go wrong. Then they use their position and influence to mitigate those risks one by one through good processes, built-in capacity and redundancy, and thorough preparation before a plan is executed. To do anything less is a disservice to your users.
