In the fast-moving world of business technology and communications, computer networks form a significant part of IT infrastructure. Hardware and software must be high-performance, flexible and stable – providing uptime of almost 100 per cent, especially in customer-facing applications. Beyond raw network speed, the uninterrupted availability of applications and 24/7 access to operational or transactional data are crucial.
However, a number of complex factors shape modern IT infrastructure: growing data volumes, increasing numbers of end users, and new devices, network protocols and applications. Additionally, companies are implementing cloud computing and virtualisation in a drive to increase flexibility and efficiency. In lean organisations, IT infrastructure is expected to be flexible, dynamic and reliable.
With virtualisation technology, the days of over-provisioned servers and storage are over; cost-conscious IT departments increasingly require their teams to obtain maximum return on hardware investment. Even infrastructure that is under-utilised, with the luxury of excess capacity for exceptional spikes in traffic, may still not be configured for best performance. Expertise is therefore required throughout the lifecycle of any IT project.
Given that core IT services and a dependable infrastructure are business critical, how do senior IT managers, directors and decision makers measure IT performance in large organisations? Many measurement tools are specific to individual devices such as servers, network buses and storage systems. Although each of these gives a partial view, the results from individual components cannot simply be added up to obtain a valid index of overall infrastructure performance.
A more holistic approach that combines expertise in applications, databases, networks and storage gives a better indication. A good infrastructure performance management (IPM) solution examines physical, virtual and cloud environments, correlates the results and reports on system-wide performance. This means that bottlenecks can be identified and remedied before any impact on user functionality is noticed.
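As a rough illustration of that correlation idea, the sketch below combines latency samples gathered from physical, virtual and cloud segments into a single system-wide view and flags the likely bottleneck. The segment names, sample values and latency budget are invented purely for illustration; a real IPM platform would collect such data from its own monitoring agents.

```python
from statistics import mean

# Hypothetical latency samples (in ms) per environment segment; values are made up.
latency_samples = {
    "physical": [12, 14, 13, 15],
    "virtual": [18, 22, 21, 19],
    "cloud": [35, 60, 48, 52],
}

LATENCY_BUDGET_MS = 40  # illustrative service-level target, not a real benchmark


def system_wide_view(samples):
    """Correlate per-segment averages into one report and flag the slowest segment."""
    averages = {segment: mean(values) for segment, values in samples.items()}
    bottleneck = max(averages, key=averages.get)
    return averages, bottleneck


averages, bottleneck = system_wide_view(latency_samples)
for segment, avg in averages.items():
    status = "OK" if avg <= LATENCY_BUDGET_MS else "OVER BUDGET"
    print(f"{segment:>8}: {avg:.1f} ms ({status})")
print(f"Likely bottleneck: {bottleneck}")
```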
Typical metrics include latency, reliability and security of the hardware inventory, taking into account the technology employed and its specific characteristics. Here, processor and storage capability and capacity are of key significance. Additionally, organisations can monitor the occurrence rate of hardware and software incidents, along with the percentage successfully resolved and the time taken to resolve them. Periodic and exception reporting are useful management tools in this respect, with a focus on the customer – whether internal or external. A more subjective measure is system agility, for example whether data can be backed up to the cloud, to onsite devices or to both. Finally, fiscal cost is also relevant, as expenditure on any business solution clearly affects the bottom line.
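To make the incident metrics concrete, here is a minimal sketch showing how occurrence rate, resolution percentage and mean time to resolve might be computed over a reporting period. The incident records and the 30-day period are invented for the example; the calculations themselves are simple arithmetic over opened/resolved timestamps.

```python
from datetime import datetime, timedelta

# Invented incident records for illustration: (opened, resolved-or-None).
incidents = [
    (datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 2, 11, 30)),
    (datetime(2024, 5, 9, 14, 0), datetime(2024, 5, 9, 18, 0)),
    (datetime(2024, 5, 20, 8, 0), None),  # still open
]

REPORTING_PERIOD_DAYS = 30  # assumed reporting window

resolved = [(opened, closed) for opened, closed in incidents if closed is not None]

occurrence_rate = len(incidents) / REPORTING_PERIOD_DAYS       # incidents per day
resolution_pct = 100 * len(resolved) / len(incidents)          # % successfully resolved
mean_time_to_resolve = sum(
    (closed - opened for opened, closed in resolved), timedelta()
) / len(resolved)                                              # average time taken

print(f"Occurrence rate:      {occurrence_rate:.2f} incidents/day")
print(f"Resolved:             {resolution_pct:.0f}%")
print(f"Mean time to resolve: {mean_time_to_resolve}")
```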
In summary, IT infrastructure supports and accelerates organisational performance, thereby creating competitive advantage for the business. Performance management can be enhanced through an IPM platform, contributing to the wider success of the company by mitigating risk, improving agility and controlling costs.
About the Author:
Alex Hooper is the CTO at Cisilion, which he joined in 2014 after previously running the global presales team at nscglobal. He brings over 15 years of experience working for systems integrators and service providers. His primary focus at Cisilion is developing a full breadth of networking and IT infrastructure services for the company's customers, and he has a passion for creating value for them.