Many organisations are trying to update their infrastructure to improve efficiency and manage rising costs. But the path is rarely simple. Hybrid setups, legacy systems, and new demands from enterprise AI often create trade-offs for IT teams.

Recent moves by Microsoft and several storage and data-platform vendors highlight how enterprises are trying to deal with these issues, and what other companies can learn from them as they plan their own enterprise AI strategies.

Modernisation often stalls when costs rise

Many businesses want the flexibility of cloud computing but still depend on systems built on virtual machines and years of internal processes. A common problem is that older applications were never built for the cloud. Rewriting them can take time and create new risks. But a simple “lift and shift” move often leads to higher bills, especially when teams do not change how the workloads run.

Some vendors are trying to address this by offering ways to move virtual machines to Azure without major changes. Early users say the draw is the chance to test cloud migration without reworking applications on day one. For some, this early testing is tied to preparing systems that will later support enterprise AI workloads.

Early adopters also point to lower storage costs when workloads are managed through Azure’s own tools, which helps keep the move predictable. The key lesson for other companies is to look for migration paths that match their existing operations instead of forcing a full rebuild from the start.

Data protection and control remain top concerns in hybrid environments

The risk of data loss or long outages still keeps many leaders cautious about large modernisation plans. Some organisations are now building stronger recovery systems across on-premises, edge, and cloud locations. Standard planning now includes features such as immutable snapshots, replication, and better visibility into compromised data.

A recent integration between Microsoft Azure and several storage systems aims to give companies a single way to manage data across on-premises hardware and Azure services. Interest has grown among organisations that need local data residency or operate under strict compliance rules. These setups let them keep sensitive data in-country while still working with Azure tools, which is increasingly important as enterprise AI applications depend on reliable and well-governed data.

For businesses facing similar pressures, the main takeaway is that hybrid models can support compliance needs when the control layer is unified.

Preparing for AI often requires stronger data foundations, not a full rebuild

Many companies want to support AI projects but don’t want to overhaul their entire infrastructure. Microsoft’s SQL Server 2025 adds vector database features that let teams build AI-driven applications without switching platforms. Some enterprises have paired SQL Server with high-performance storage arrays to improve throughput and reduce the size of AI-related data sets. The improvements are becoming part of broader enterprise AI planning.
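
To make the idea concrete, the sketch below shows what an early vector workload on SQL Server 2025 might look like from Python. It is a minimal sketch, not a reference implementation: it assumes the preview vector syntax Microsoft has described (a VECTOR column type and the VECTOR_DISTANCE function) and a standard pyodbc connection, and the table name, embedding dimensions, and credentials are placeholders rather than anything tied to the deployments mentioned above.

```python
# Minimal sketch of SQL Server 2025's preview vector features via pyodbc.
# Assumptions: the VECTOR type and VECTOR_DISTANCE function behave as in
# the public preview; table, column names, and credentials are placeholders.
import json
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=your-server;DATABASE=your-db;"
    "UID=your-user;PWD=your-password"  # placeholder credentials
)
cur = conn.cursor()

# A small table holding pre-computed embeddings alongside business data.
cur.execute("""
    CREATE TABLE dbo.product_embeddings (
        product_id INT PRIMARY KEY,
        name       NVARCHAR(200),
        embedding  VECTOR(3)  -- real embeddings are typically hundreds of dims
    )
""")

# Vectors can be supplied as JSON arrays and cast to the VECTOR type.
cur.execute(
    "INSERT INTO dbo.product_embeddings VALUES (?, ?, CAST(? AS VECTOR(3)))",
    1, "example product", json.dumps([0.12, 0.87, 0.41]),
)
conn.commit()

# Nearest-neighbour lookup with cosine distance, most similar rows first.
query_vec = json.dumps([0.10, 0.90, 0.40])
rows = cur.execute("""
    SELECT TOP 5 product_id, name,
           VECTOR_DISTANCE('cosine', embedding, CAST(? AS VECTOR(3))) AS dist
    FROM dbo.product_embeddings
    ORDER BY dist
""", query_vec).fetchall()

for row in rows:
    print(row.product_id, row.name, row.dist)
```

The point is less the exact syntax than the pattern: embeddings sit next to existing business rows, so early similarity search can run without standing up a separate vector store.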

Teams working with these setups say the attraction is the chance to run early AI workloads without committing to a new stack. They also report that more predictable performance helps them scale when teams begin to train or test new models. The larger lesson is that AI readiness often starts with improving the systems that already hold business data instead of adopting a separate platform.

Managing Kubernetes alongside older systems introduces new complexity

Many enterprises now run a mix of containers and virtual machines. Keeping both in sync can strain teams, especially when workloads run in more than one cloud. Some companies are turning to unified data-management tools that allow Kubernetes environments to sit alongside legacy applications.

One example is the growing use of Portworx with Azure Kubernetes Service and Azure Red Hat OpenShift. Some teams use it to move VMs into Kubernetes through KubeVirt while keeping familiar workflows for automation. The approach aims to reduce overprovisioning and make capacity easier to plan. For others, it is part of a broader effort to make their infrastructure ready to support enterprise AI initiatives. It also gives companies a slower, safer path to container adoption. The broader lesson is that hybrid container strategies work best when they respect existing skills rather than forcing dramatic shifts.
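
For readers unfamiliar with KubeVirt, the sketch below shows roughly what registering a virtual machine as a Kubernetes object involves, using the official kubernetes Python client. It is illustrative only: it assumes KubeVirt (or OpenShift Virtualization on ARO) is already installed on the cluster, and the demo container disk stands in for real VM storage, which in a Portworx-backed setup would normally come from a persistent volume claim.

```python
# Minimal sketch: registering a KubeVirt VirtualMachine on a Kubernetes
# cluster (e.g. AKS or ARO) via the official kubernetes Python client.
# Assumptions: KubeVirt is already installed, and the demo container-disk
# image and resource sizes below are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubectl context
api = client.CustomObjectsApi()

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},
    "spec": {
        "running": True,
        "template": {
            "metadata": {"labels": {"kubevirt.io/vm": "legacy-app-vm"}},
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "2Gi", "cpu": "1"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # A real migration would point at imported VM storage,
                        # e.g. a PVC on a Portworx-backed StorageClass.
                        "containerDisk": {
                            "image": "quay.io/kubevirt/cirros-container-disk-demo"
                        },
                    }
                ],
            },
        },
    },
}

# VirtualMachine is a custom resource, so it is created through the
# CustomObjectsApi rather than a typed client.
api.create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm_manifest,
)
```

Once a VM is expressed as a manifest like this, it can be managed by the same pipelines teams already use for containers, which is where the “familiar workflows” benefit comes from.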

A clearer path is emerging for companies planning modernisation

Across these examples, a common theme stands out: most enterprises are not trying to rebuild everything at once. They want predictable migration plans, stronger data protection, and practical ways to support early AI projects. The tools and partnerships now forming around Azure suggest that modernisation is becoming less about replacing systems and more about improving what is already in place.

Companies that approach modernisation in small, steady steps – while keeping cost, security, and data needs in view – may find it easier to move forward without taking on unnecessary risk.
