For the most part, I don't set up test environments anymore and I recommend against test teams installing and configuring systems under test -- unless the end user is expected to install and configure the system on their local machine.
The key to test environments is that they should match production as closely as possible (in most cases). It makes no sense to me to expect the test team to install and configure an application on machines that don't match production (or the development environment, for that matter) when the test team wasn't even involved in figuring out how to install and configure the system in the first place. Whoever installs and configures the development and production environments should also install and configure the test environment.
If you are asking about setting up a test lab to test application compatibility across various platforms, browsers, etc., I've become a huge fan of virtual environments.
If you are asking about testing support environments (defect tracking systems, automation tools, requirements management systems, test management systems and so forth), that is entirely dependent on your corporate policies, project goals and the tools your team uses.
I can think of no significant difference between the Web and client-side applications, except that client-side applications typically start with a CD/DVD while Web-based applications start with a download. Either way, if software gets installed, you want to make sure it installs properly on all supported systems and configurations. Typically, that's not possible given the time constraints. In those situations I use sampling methods like all-pairs or orthogonal arrays to help me reduce the total sample size while managing the degree and type of coverage.
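To make the all-pairs idea concrete: instead of testing every combination of platform variables, you generate a much smaller set of test configurations in which every pair of values still appears together at least once. Dedicated tools exist for this (Microsoft's PICT, for example); the sketch below is only a minimal greedy generator in Python, and the parameter names and values are invented for illustration.

```python
from itertools import combinations, product

def allpairs(params):
    """Greedy all-pairs (pairwise) test-configuration generator.

    params: dict mapping a parameter name to its list of values.
    Returns a list of dicts (test configurations) such that every
    pair of values from any two parameters appears in some config.
    """
    names = list(params)

    # Every parameter-value pair that must be covered at least once.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(frozenset([(a, va), (b, vb)]))

    cases = []
    while uncovered:
        # Seed each new configuration with a still-uncovered pair,
        # which guarantees every iteration makes progress.
        case = dict(next(iter(uncovered)))
        for name in names:
            if name in case:
                continue
            # Pick the value that covers the most uncovered pairs
            # against the values already chosen for this config.
            def gain(v):
                return sum(
                    1 for other, ov in case.items()
                    if frozenset([(name, v), (other, ov)]) in uncovered
                )
            case[name] = max(params[name], key=gain)
        # Mark every pair in this configuration as covered.
        for a, b in combinations(names, 2):
            uncovered.discard(frozenset([(a, case[a]), (b, case[b])]))
        cases.append(case)
    return cases

# Hypothetical compatibility matrix: 3 x 2 x 2 = 12 exhaustive
# combinations, but pairwise coverage needs far fewer configs.
matrix = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "mac"],
    "network": ["lan", "wifi"],
}
configs = allpairs(matrix)
```

The greedy heuristic won't always find the theoretical minimum number of configurations, but it reliably lands well under the exhaustive count, which is the practical point of sampling under time constraints.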
This was first published in February 2008