Galin Iliev's blog

Software Architecture & Development

Virtual Machine's Network Adapter Hangs

I recently moved this blog to a new Virtual Machine kindly provided by my friend Nanio Nanev and his system administration company PrimaNet Consult LTD.

The VM runs Windows 2003 Web Edition SP2 and is very fast (it is hosted on a monster of a host server), but there is one nasty issue we keep fighting: the network adapter connected to the WAN - the external network, with a real static IP address - hangs once in a while.

How can an Intel 21140-Based PCI Fast Ethernet Adapter (Generic) hang on a virtual machine?!

I was able to connect through the internal network adapter, and after disabling and re-enabling the WAN adapter it was fine for another 3-4 hours.

I've found a way to script this using DevCon - a command-line utility that functions as an alternative to Device Manager (direct download link).

Using it, this simple script does the job:

C:\Install\devcon disable PCI\VEN_1011&DEV_0009&SUBSYS_21140A00&REV_20\3&267A616A&0&50
C:\Install\devcon enable PCI\VEN_1011&DEV_0009&SUBSYS_21140A00&REV_20\3&267A616A&0&50

Note that the device instance ID can differ, so the question "How did you get these?" comes naturally. Here is how you can list all devices in the net setup class:

c:\install\devcon listclass net

And here is the result in my case:


So running this reset periodically helps for now, but it is not the smartest solution. Does anyone have a better idea?
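To avoid resetting the adapter blindly on a fixed schedule, one option is a small batch file that bounces the adapter only when the gateway stops answering. This is just a sketch: the gateway address below is made up, the devcon path and device instance ID are the ones from the script above, and you would run it every few minutes as a scheduled task.

```bat
@echo off
rem reset-wan.cmd - bounce the WAN adapter only when it stops responding.
rem Assumed values: GATEWAY is a placeholder; replace it with your real gateway IP.
set "DEVCON=C:\Install\devcon.exe"
set "DEVICE=PCI\VEN_1011&DEV_0009&SUBSYS_21140A00&REV_20\3&267A616A&0&50"
set "GATEWAY=192.0.2.1"

ping -n 2 %GATEWAY% >nul
if errorlevel 1 (
    rem Gateway unreachable - disable and re-enable the WAN adapter.
    "%DEVCON%" disable "%DEVICE%"
    "%DEVCON%" enable "%DEVICE%"
)
```

Scheduled with schtasks (for example every 5 minutes), this keeps the reset from interrupting the connection when the adapter is actually healthy.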

Maintain Database Indexes

It is widely known that creating an index on a table column can speed up queries that have that column in their WHERE clause. Table indexes are B-trees in most cases, and they are stored in pages much like the data itself. Over time the data changes, which causes the indexes to change, and they require some sort of maintenance to keep the database optimized and running as fast as possible.

Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.*

This can be fixed either by rebuilding the index (dropping the existing one and creating a new one) or by reorganizing (defragmenting) it.

Rebuild indexes

Rebuilding indexes can be done with any of these:

  • ALTER INDEX ... REBUILD
  • CREATE INDEX ... WITH (DROP_EXISTING = ON)
  • DBCC DBREINDEX (deprecated)

I won't cover the details, as you can look them up on MSDN. My favorite way is ALTER INDEX ... REBUILD with the ONLINE option.

The advantage is that this operation is online, meaning you can query the table during the index rebuild.
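As a sketch (the table and index names here are made up for illustration; note that ONLINE = ON requires Enterprise Edition):

```sql
-- Rebuild a single index without taking the table offline.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- Or rebuild all indexes on the table at once:
ALTER INDEX ALL ON dbo.Orders
REBUILD WITH (ONLINE = ON);
```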

Reorganizing indexes

Reorganizing an index defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical order (left to right) of the leaf nodes. Having the pages in order improves index-scanning performance. The index is reorganized within the existing pages allocated to it; no new pages are allocated. If an index spans more than one file, the files are reorganized one at a time. Pages do not migrate between files.

Reorganizing also compacts the index pages. Any empty pages created by this compaction are removed providing additional available disk space.*


There are two ways to perform index reorganization:

  • ALTER INDEX ... REORGANIZE
  • DBCC INDEXDEFRAG (deprecated)

Again, my preference is ALTER INDEX ... REORGANIZE.

This is also an online operation.
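A sketch of the reorganize form, again with made-up table and index names:

```sql
-- Reorganize a single index; reorganizing is always an online operation.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;

-- Reorganize all indexes on the table:
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
```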

Note: Although both operations above are supposed to be online, I applied them to big tables (over 140M records and ~60 GB across two tables), and of course it was pretty I/O intensive, which caused some delays in concurrent queries. Keeping in mind that the default CommandTimeout in the .NET class library is 30 seconds, an application writing at least once per minute makes for a very challenging database to maintain. A possible solution would be the MS SQL Server 2008 Resource Governor; unfortunately, the server was running MS SQL 2005...

How to detect fragmentation

In order to apply the techniques described above, fragmentation first has to be detected. For this there is a new DMF (dynamic management function), sys.dm_db_index_physical_stats, which reports fragmentation as a percentage (avg_fragmentation_in_percent).

These are the recommendations, depending on the value returned in the avg_fragmentation_in_percent column:

avg_fragmentation_in_percent value    Corrective statement
> 5% and <= 30%                       ALTER INDEX REORGANIZE
> 30%                                 ALTER INDEX REBUILD WITH (ONLINE = ON)

Using sys.dm_db_index_physical_stats by itself is not very useful, so I prefer joining it with the system views sys.tables and sys.indexes:

---=== get index fragmentation
SELECT a.index_id, AS TableName, AS IndexName, a.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS a
INNER JOIN sys.indexes AS i ON a.object_id = i.object_id AND a.index_id = i.index_id
INNER JOIN sys.tables AS t ON t.object_id = i.object_id
ORDER BY a.avg_fragmentation_in_percent DESC
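Putting the query and the thresholds from the table above together, a sketch like the following generates the appropriate corrective statement per index (current database only; the LIMITED scan mode is used to keep it fast):

```sql
-- Generate REORGANIZE or REBUILD statements per the >5% / >30% thresholds.
SELECT AS TableName,
       AS IndexName,
       s.avg_fragmentation_in_percent,
       CASE
           WHEN s.avg_fragmentation_in_percent > 30
               THEN 'ALTER INDEX ' + + ' ON ' + + ' REBUILD;'
           ELSE 'ALTER INDEX ' + + ' ON ' + + ' REORGANIZE;'
       END AS CorrectiveStatement
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
INNER JOIN sys.indexes AS i
    ON s.object_id = i.object_id AND s.index_id = i.index_id
INNER JOIN sys.tables AS t
    ON t.object_id = i.object_id
WHERE s.avg_fragmentation_in_percent > 5
  AND i.index_id > 0  -- skip heaps, which have no index name
ORDER BY s.avg_fragmentation_in_percent DESC;
```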


The result looks like this (executed against AdventureWorks):


sys.dm_db_index_physical_stats can take time to execute, so if you just want to view all indexes with their table names, this can be used:

SELECT t.object_id, AS TableName, AS IndexName, i.type_desc AS IndexType
FROM sys.indexes AS i
INNER JOIN sys.tables AS t ON t.object_id = i.object_id
WHERE i.object_id > 1000
ORDER BY t.create_date ASC

Which returns the following result:


And it can be used to generate the T-SQL statements for reorganizing the indexes one at a time:

SELECT 'ALTER INDEX ' + + ' ON ' + + ' REORGANIZE;'
FROM sys.indexes AS i
INNER JOIN sys.tables AS t ON t.object_id = i.object_id
WHERE i.object_id > 100
  AND i.index_id > 0 -- heaps have no index name
ORDER BY t.create_date DESC




So far we took a look at the following:

  • Detect index fragmentation
  • Rebuild indexes
  • Reorganize indexes
  • Use T-SQL to generate T-SQL to maintain indexes.

I hope this helps.

* quoted from MSDN article Reorganizing and Rebuilding Indexes.

Using Microsoft ADO.NET Data Services

Mike Flasko (ADO.NET Data Services Program Manager) at Microsoft Corp. wrote an extensive article about the recently released ADO.NET Data Services, called Using Microsoft ADO.NET Data Services. It is full of examples, and it is exactly the type of article developers prefer to read :) - although slightly long.

The examples included are:

  • Example 1: Basic data service in C#
  • Example 2: ADO.NET Data Service exposing an in-memory data source
  • Example 3: Response for the root of a data service
  • Example 4: Listing of the contents of an entity set, in Atom/APP format
  • Example 5: Response for a single-entity URL
  • Example 6: A single-entity response from the data service
  • Example 7: A response that contains multiple entities
  • Example 8: Response with nested related entities using the "expand" option
  • Example 9: Atom service document as returned from an ADO.NET Data Service
  • Example 10: JSON response from a data service for a single 'Customer' entity
  • Example 11: A hierarchical result containing a Customer and its related Sales Orders, in JSON format
  • Example 12: Payloads for creating a new Category entity using Atom and JSON
  • Example 13: Response from the data service after processing a POST request for creating a Category, in Atom and JSON formats
  • Example 14: Payload used to modify an existing Category entity through an HTTP PUT request, Atom and JSON formats
  • Example 15: Payload used to modify an existing Category entity through an HTTP PUT request, Atom and JSON formats
  • Example 16: Payload to create a new Territory entity that includes an association to a Region entity
  • Example 17: Payload to update a Territory so it is associated with a different Region entity
  • Example 18: Keys-only payload format used for inserting an association
  • Example 19: Inserting a graph of data in a single request
  • Example 20: Request and Response using CategoryName as a concurrency token
  • Example 21: Request to update the name of a Category
  • Example 22: A data service operation to retrieve filtered customers
  • Example 23: Setting Visibility of Service Operations
  • Example 24: Access an Astoria data service from a .NET application using the client library
  • Example 25: Retrieving all customers in the city of London, ordered by company name
  • Example 26: Delay-loading related entities using the Load() method
  • Example 27: Using "expand" to eagerly-load related entities
  • Example 28: Inserting a new entity instance using the client library
  • Example 29: Updating an existing entity using the client library
  • Example 30: Creating a product entity and associate it with a Category
  • Example 31: Sending queries as a batch request
  • Example 32: Using the asynchronous API in the client library
  • Example 33: Setting service-wide access policy
  • Example 34: Query interceptor method implementing a custom, per request access policy
  • Example 35: Update interceptor method that validates input Category entities before being persisted in the underlying store
  • Example 36: Assume a validation error occurred while processing a request which caused an ArgumentException to be thrown invoking the exception handler shown in the ‘service code’ section.

Pretty long list, isn't it?!

For more details and code samples, read the article Using Microsoft ADO.NET Data Services on MSDN. on IIS7 performance data

You know, right? :) This is the corporate web site of the biggest software company and a very desirable target for every hacker (or wannabe hacker). When this website (or some other Microsoft website like is down or shows an unexpected error, there are screenshots all over the web (and blog posts), and it becomes the news of the day in the software world :) - or at least in the web dev world.

So now imagine you are the decision maker for its hosting platform!? Or the hardware behind it?! Or setting the bandwidth :)?! There is very little room for mistakes, huh?

And still, has been hosted on IIS7... since Beta 3 (the post is from June 15th, 2007). When Microsoft trusts IIS7 enough to host such an important site on it, why can't you?

There is no doubt that the configuration behind is interesting, so here it is:

=============== configuration ====================

Hardware:
  Model: HP DL585 G1 (4 dual-core CPUs)
  RAM: 32GB

Operating System:
  Windows Server 2008 RTM (Build: 6.0.6001.18000) Enterprise Edition x64

Clusters:
  Number of clusters: 4 (in multiple datacenters)
  Machines in each cluster: 20
  Total machines: 80

Load Balancing:
  A hardware load-balancing solution is used. The load-balancing algorithm is based on "least current client connections" to each load-balanced member server of the cluster (not round robin or any other algorithm). The hardware load balancer maintains the same number of current client connections to each member of the cluster, so if a W2K8 server is completing web requests faster than a W2K3 server, the load balancer will send more traffic to the W2K8 RTM server.


Recently some performance data was released on TechNet, and here is what it says:

  • Win2008/IIS7 processes more requests per second (RPS) than Win2003/IIS6.
  • Due to #1, Win2008's CPU is more utilized.
  • As Win2008/IIS7 performs better, the load balancer sends more requests to it.


Server Efficiency (RPS/ CPU %) – Efficiency of serving live web platform traffic

W2K3 SP2 4.36 “requests per CPU cycle”

W2K8 RTM 4.84 ~ 10.9% increased efficiency

CPU Utilization (%)

W2K3 SP2 44.8%

W2K8 RTM 52.8% ~ 17.9% degradation (This is impacted by the increased RPS the W2K8 servers are handling)

Web Service – Total Methods Requests/Sec (RPS)

W2K3 SP2 194

W2K8 RTM 255 ~ 31.4% more traffic is being sent to the W2K8 RTM servers

Web Service – Current Connections

W2K3 SP2 280

W2K8 RTM 294 ~ 5% increase

Load Balancing – Current Client Connections

W2K3 SP2 116

W2K8 RTM 116 Equal – as the hardware load balancer maintains the same amount of outstanding open client connections.

.NET CLR Memory – % Time in GC

W2K3 SP2 1.1%

W2K8 RTM 2.5% No significant degradation in "% Time in GC"


Source: Operations blog post on TechNet.

IIS7 really is a next-generation web platform...