The job of a network manager has never been more complicated than it is today. Servers, clients and everything in between have evolved immensely over the last 15 years, and as a result many operational and customer-facing applications can be deployed quickly and effectively in the enterprise. As an IT manager, you are expected to ensure that all of these systems respond instantly, without failure, at the lowest possible cost. How does one begin to plan this kind of investment?
The logical first step is to collect the system requirements for each application. On the server side, how many users will access the service concurrently at the peak time of the day, week or month? Which failures should be anticipated and bypassed? What OS and server software must run on the server to host the application? And on the client side, is a special software installation required? Can it be reinstalled remotely when necessary? Can it run on any client OS? Where are the clients located?
Following this, the second step is to define the server infrastructure. How many such users can a single server handle? What kind of applications can be accelerated by a special hardware appliance? Should there be multiple sites for disaster recovery? What network pipe will the service consume?
Assuming that all these questions are answered, you can design your network and server equipment to handle the full volume of transactions coming from the application clients. This is an art in itself, requiring skill, experience and constant education, and it is outside the scope of this article.
Are you done now? Certainly not. What are you forgetting? Security, of course.
All of the above considerations apply only to legitimate application traffic. But as we all know, today's networks carry heaps of illegitimate traffic, and it would be naïve to assume that your application infrastructure will not be exposed to malicious use. You obviously have a firewall that controls access to your network; however, it cannot differentiate between legitimate and illegitimate traffic to a single application, and that leaves you vulnerable.
And when considering your application infrastructure investment, this traffic may change all of your calculations:
- A small burst of automatically generated traffic may consume as many server resources as all of the legitimate application traffic combined. To absorb such bursts, you may need to double your investment in servers.
- While handling such a burst, a server may reach the point where all of its resources are consumed and it has to be rebooted. Users on that server will be disconnected and their transactions lost. Storing and mirroring transaction state to survive such a scenario may delay the rollout of applications and add complexity to application development.
- Such a burst also consumes bandwidth that you have to pay for, forcing you to plan larger network pipes at your data center and office locations. Otherwise, users will experience latency and slow responsiveness due to network packet loss.
- An exploit of one server may take time to fix and patch. Even if your infrastructure is smart enough to detect the server failure and bypass it, you still need to invest in the manpower for server maintenance.
These extra costs and operational headaches can be avoided by integrating security protection into your network infrastructure. Here are some mechanisms worth implementing in your network:
- Limit the number of users and connections distributed to each of your servers so that they are never overwhelmed with traffic. You can cap either the number of connections or the amount of bandwidth that each server handles. This guarantees the health of in-flight transactions and saves back-end synchronization. It is better to refuse additional connections than to lose transactions that are already under way.
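As a rough illustration of such admission control, a dispatcher sitting in front of the servers might enforce both caps before handing a connection to a server. This is a minimal Python sketch; the class name and the thresholds are hypothetical, not any product's API:

```python
class ServerLimiter:
    """Per-server admission control: cap both concurrent connections
    and aggregate bandwidth, refusing new connections past the limits."""

    def __init__(self, max_connections, max_bandwidth_bps):
        self.max_connections = max_connections
        self.max_bandwidth_bps = max_bandwidth_bps
        self.active_connections = 0
        self.current_bandwidth_bps = 0

    def admit(self, estimated_bps):
        """Accept a new connection only if both limits still hold."""
        if self.active_connections >= self.max_connections:
            return False  # refuse rather than overwhelm the server
        if self.current_bandwidth_bps + estimated_bps > self.max_bandwidth_bps:
            return False
        self.active_connections += 1
        self.current_bandwidth_bps += estimated_bps
        return True

    def release(self, estimated_bps):
        """Free the resources when a connection closes."""
        self.active_connections -= 1
        self.current_bandwidth_bps -= estimated_bps
```

Refusing at admission time, as above, is what preserves the transactions already in flight: existing users keep their resources, and only new arrivals are turned away.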
- Ensure that the network intelligently protects your applications from floods of application requests. Devices exist that incorporate statistical algorithms proven to block most flood traffic without eliminating valid transactions. This technology runs multiple processes in parallel to:
- Learn the behavior of the applications in terms of traffic pattern distribution.
- Continuously analyze the traffic to identify irregular patterns that deviate suspiciously from the learned baseline, and decide whether these patterns are the result of a valid traffic load or a methodical attack.
- Once an attack is detected, identify one or more footprints that differentiate the attack traffic from valid traffic.
- Inspect the application traffic and block only those footprints, so that the attack is mitigated without interrupting regular traffic.
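The learn/analyze/footprint cycle above can be sketched in simplified form. The class, the three-sigma anomaly threshold and the "dominant source address" footprint below are assumptions made for illustration; real flood-mitigation devices use far richer statistics over many traffic dimensions:

```python
import statistics
from collections import Counter, deque

class FloodDetector:
    """Learn a baseline request rate, flag deviations from it, and
    derive a simple attack footprint (here: the dominant source)."""

    def __init__(self, window=60, threshold_sigmas=3.0):
        # Rolling window of recent per-second request counts.
        self.baseline = deque(maxlen=window)
        self.threshold_sigmas = threshold_sigmas

    def learn(self, requests_per_second):
        """Feed an observed per-second request count into the baseline."""
        self.baseline.append(requests_per_second)

    def is_anomalous(self, requests_per_second):
        """Flag rates well above the learned mean (mean + k * stdev)."""
        if len(self.baseline) < 2:
            return False  # not enough history to judge
        mean = statistics.mean(self.baseline)
        stdev = statistics.pstdev(self.baseline)
        # Floor the deviation at 1.0 so a flat baseline isn't oversensitive.
        return requests_per_second > mean + self.threshold_sigmas * max(stdev, 1.0)

    @staticmethod
    def footprint(sources_in_window):
        """Simplest possible footprint: the source sending the most traffic."""
        addr, _count = Counter(sources_in_window).most_common(1)[0]
        return addr
```

Once a footprint is extracted, only traffic matching it is dropped, which is how such devices suppress the flood while leaving unrelated valid transactions untouched.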
- Use the network to inspect traffic at the application level and detect exploits of your application infrastructure. Whatever software you run on your servers, including the OS and the server software, it will have vulnerabilities; that is the nature of code development. Many of these vulnerabilities are documented and can be exploited. A network IPS device inspects the application data, detects exploits of documented vulnerabilities or suspiciously irregular application access attempts, blocks them and reports them. Such functionality saves the headache of continuously patching servers that have been compromised. Since new vulnerabilities are published frequently, the network IPS should be updated regularly with new sets of protections.
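At its simplest, the inspection an IPS performs amounts to matching application payloads against a set of known exploit signatures. The signature names and patterns below are hypothetical toy examples; commercial IPS devices ship vendor-maintained signature sets that are updated as new vulnerabilities are published:

```python
import re

# Toy signature set for illustration only. A real IPS carries thousands
# of signatures for documented vulnerabilities, updated by the vendor.
SIGNATURES = {
    "sql-injection": re.compile(rb"(?i)union\s+select"),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}

def inspect(payload: bytes):
    """Return the names of all signatures matched in an application payload.

    A non-empty result would cause the IPS to block and report the request.
    """
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]
```

Because the match happens in the network, a vulnerable server behind the IPS is shielded even before its own patch is applied, which is why keeping the signature set current matters as much as patching itself.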
In conclusion, planning your application infrastructure to include security protection capabilities can save you a lot of resources and future investment. Without it, you may be opening yourself to unpredictable roadblocks on the way to true application security.
About the author: Amir Peles is chief technical officer at Radware.