Welcome to the amazing .NET programming blog

Author: Vijaya Kumar
October 27, 2006

High-Performance Web Applications using ASP.NET

Writing a Web application with ASP.NET is unbelievably easy. So easy, many developers don't take the time to structure their applications for great performance. This article won't be the definitive guide for performance-tuning Web applications; an entire book could easily be devoted to that. Instead, think of this as a good place to start.

You should think about the separation of your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. There is, however, a performance hit associated with process and machine hopping, so it should be avoided: whenever possible, run the ASP.NET pages and their associated components together in the same application. Because of the separation of code and the boundaries between tiers, using Web services or remoting can decrease performance by 20 percent or more. The data tier is a bit of a different beast, since it is usually better to have dedicated hardware for your database. However, the cost of process hopping to the database is still high, so performance on the data tier is the first place to look when optimizing your code.

Before diving in to fix performance problems in your applications, make sure you profile your applications to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are also very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.

There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but when compounded across the total requests per day, it can result in an enormous improvement.

Contents:
Performance on the Data Tier
Return Multiple Resultsets
Paged Data Access
Connection Pooling
ASP.NET Cache API
Per-Request Caching
Background Processing
Server Control View State
Page Output Caching and Proxy Servers
Run IIS 6.0 (If Only for Kernel Caching)

Performance on the Data Tier

When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I'm not covering those in this article.

If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.

With that general information established, let's look at nine tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.

Return Multiple Resultsets

Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.

While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reducing the size of the dataset, the time spent on the network, and the filtering work in the logic tier), it's a good thing.

Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Returning only the data you need from the database will additionally decrease memory allocations on your server.
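As a minimal sketch of that pattern (GetCustomerAndOrders is a hypothetical stored procedure that returns a customer resultset followed by an orders resultset; connectionString and customerId are assumed):

// requires: using System.Data; using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("GetCustomerAndOrders", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@CustomerID", customerId);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // populate the customer business object from the first resultset
        }
        reader.NextResult();    // advance to the second resultset
        while (reader.Read())
        {
            // populate the orders collection from the second resultset
        }
    }
}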

Paged Data Access

The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time, and paging UI is shown at the bottom of the DataGrid for navigating backward and forward through the displayed data, a fixed number of records at a time.

There's one slight wrinkle. Paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data, and then the DataGrid will display only the records for the current page, discarding the rest. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer, as more and more data must be sent on each request.

One good approach to writing better paging code is to use stored procedures. The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.
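Here is one sketch of the calling side, assuming a hypothetical stored procedure GetProductsPaged(@PageIndex, @PageSize, @TotalRecords OUTPUT) that returns just one page of rows and reports the total record count through the output parameter:

// requires: using System.Data; using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("GetProductsPaged", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@PageIndex", pageIndex);   // zero-based page number
    cmd.Parameters.AddWithValue("@PageSize", 25);           // records per page
    SqlParameter total = cmd.Parameters.Add("@TotalRecords", SqlDbType.Int);
    total.Direction = ParameterDirection.Output;

    DataTable page = new DataTable();
    new SqlDataAdapter(cmd).Fill(page);     // Fill opens and closes the connection

    int totalRecords = (int)total.Value;    // drives the paging UI calculation
}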

Connection Pooling

Setting up the TCP connection between your Web application and SQL Server can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.

Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.

To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.
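A small sketch of the open-late, close-early pattern (the connection string is the same throwaway sample used later on this page; the using blocks return the connection to the pool even if an exception is thrown):

// requires: using System.Data; using System.Data.SqlClient;
// One shared, literal connection string so every request hits the same pool.
static readonly string connectionString =
    "server=localhost;uid=sa;password=;database=northwind;";

public static DataTable GetProducts()
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM Products", conn))
    {
        conn.Open();                      // open as late as possible
        DataTable table = new DataTable();
        table.Load(cmd.ExecuteReader());  // do the work
        return table;
    }                                     // Dispose runs even on exceptions,
}                                         // returning the connection to the pool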

Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance. The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.

ASP.NET Cache API

One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature. If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).

There are several rules for caching data. First, if data can be used more than once it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific, but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.
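A minimal sketch of the check-then-populate pattern (LoadForumsFromDatabase is a hypothetical data-layer call; the key name and five-minute expiration are arbitrary):

// requires: using System; using System.Data; using System.Web; using System.Web.Caching;
public static DataTable GetForums()
{
    const string key = "Forums";
    DataTable forums = HttpRuntime.Cache[key] as DataTable;
    if (forums == null)                       // not cached yet, or expired
    {
        forums = LoadForumsFromDatabase();    // hypothetical expensive call
        HttpRuntime.Cache.Insert(key, forums, null,
            DateTime.Now.AddMinutes(5),       // absolute expiration
            Cache.NoSlidingExpiration);
    }
    return forums;
}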

Per-Request Caching

Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I've termed per-request caching.

Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.

In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request. To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
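A minimal sketch of that pattern (LoadSkinFromDatabase and the key name are hypothetical):

// requires: using System.Web;
public static string GetCurrentSkin()
{
    const string key = "CurrentSkin";
    HttpContext context = HttpContext.Current;
    string skin = context.Items[key] as string;
    if (skin == null)
    {
        skin = LoadSkinFromDatabase();   // hypothetical fetch, runs once per request
        context.Items[key] = skin;       // cached only for this request
    }
    return skin;
}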

Background Processing

The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validation of incoming data are just a few examples.

When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.

It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function. Indexing and sending e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together, indexing 25 posts at a time or sending all the e-mails every five minutes.

We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio 2005. The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.
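A sketch of the setup, typically run once from Application_Start in Global.asax (SendQueuedEmails is a hypothetical batch job; the five-minute interval is arbitrary):

// requires: using System; using System.Threading;
static Timer emailTimer;   // keep a reference so the timer isn't collected

public static void StartBackgroundWork()
{
    // Fire OnTimer on a ThreadPool thread every five minutes,
    // starting five minutes from now; no incoming request is needed.
    emailTimer = new Timer(new TimerCallback(OnTimer), null,
        TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
}

static void OnTimer(object state)
{
    SendQueuedEmails();    // hypothetical batch job: index posts, send e-mail, etc.
}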

There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.

Server Control View State

View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.

There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.

Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. View state is enabled by default, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false; you can also set it globally within the page using this setting:

<%@ Page EnableViewState="false" %>
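At the control level, the same property can be set in the markup; a minimal sketch (the grid ID is arbitrary):

<asp:DataGrid id="MyGrid" runat="server" EnableViewState="false" />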

If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.

Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.

By simply adding this line to the top of your page

<%@ OutputCache Duration="60" VaryByParam="none" %>

you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs. There are several configurable settings for output caching, such as the VaryByParam attribute just shown. VaryByParam happens to be required, but it allows you to specify the HTTP GET or HTTP POST parameters used to vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report".

Additional parameters can be named by specifying a semicolon-separated list.
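For instance, a sketch of the directive for the Report example above:

<%@ OutputCache Duration="60" VaryByParam="Report" %>

To vary on several parameters, separate them with semicolons, for example VaryByParam="Report;Page" (Page being a hypothetical second query-string parameter).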

Many people don't realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server or by Akamai, can use. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server. Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.

Run IIS 6.0 (If Only for Kernel Caching)

If you're not running IIS 6.0 (Windows Server 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In the previous tip, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request and returns the contents from the Cache.

If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.

October 24, 2006

ADO.NET FAQ

What is ADO.NET?

ADO.NET is an object-oriented framework that allows you to interact with database systems. We usually interact with database systems through SQL queries or stored procedures. ADO.NET encapsulates our queries and commands to provide uniform access to various database management systems.

ADO.NET is the successor of ADO (ActiveX Data Objects). The prime features of ADO.NET are its disconnected data access architecture and its XML integration.

What is meant by the disconnected data access architecture of ADO.NET?

ADO.NET introduces the concept of a disconnected data architecture. With traditional data access components, you make a connection to the database system and then interact with it through SQL queries using that connection; the application stays connected to the database even when it is not using its services. In the disconnected architecture, your application automatically connects to the database server when it needs to execute a query and disconnects immediately after getting the result back and storing it in a dataset. This design is very similar to the connectionless services of HTTP over the Internet. Note that ADO.NET also provides the traditional connection-oriented data access services.

An important aspect of the disconnected architecture is that it maintains a local repository of data in the dataset object. The dataset object stores the tables, their relationships, and the various constraints. The user performs operations like update, insert, and delete on this dataset locally, and finally the changed dataset is written to the actual database as a batch when needed. This greatly reduces network traffic and results in better performance.

What is meant by the connected data access architecture of ADO.NET?

In the connected environment, it is your responsibility to open and close the database connection. You first establish the database connection, perform the desired operations on the database, and close the connection when you are done. All changes are made directly in the database, and no local (in-memory) buffer is maintained.
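A minimal C# sketch of connected access (connectionString is assumed; the Products table is from the Northwind sample used elsewhere in this FAQ):

C#
// requires: using System.Data.SqlClient;
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand cmd = new SqlCommand("SELECT * FROM Products", conn);
conn.Open();                              // opening is your responsibility
SqlDataReader dr = cmd.ExecuteReader();
while (dr.Read())
{
    // work directly against the database; no local buffer is kept
}
dr.Close();
conn.Close();                             // closing is your responsibility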

What is a dataset?

A dataset is a local repository of data, used to store tables and disconnected recordsets. When using the disconnected architecture, all updates are made locally to the dataset and then applied to the database as a batch.

What is a data adapter?

A data adapter is the component that sits between the local repository (the dataset) and the physical database. It contains four commands (SELECT, INSERT, UPDATE, and DELETE). It uses these commands to fetch data from the database and fill the dataset, and to apply updates made in the dataset back to the physical database. The data adapter is also responsible for opening and closing the database connection as it communicates with the dataset.
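A minimal C# sketch of the disconnected round-trip (connectionString is assumed; SqlCommandBuilder is one simple way to derive the INSERT, UPDATE, and DELETE commands from the SELECT):

C#
// requires: using System.Data; using System.Data.SqlClient;
SqlDataAdapter adapter = new SqlDataAdapter(
    "SELECT * FROM Products", connectionString);
SqlCommandBuilder builder = new SqlCommandBuilder(adapter); // derives INSERT/UPDATE/DELETE

DataSet ds = new DataSet();
adapter.Fill(ds, "Products");       // the adapter opens and closes the connection itself

// ... the user edits ds locally: updates, inserts, deletes ...
ds.Tables["Products"].Rows[0]["ProductName"] = "Renamed product";

adapter.Update(ds, "Products");     // all pending changes sent back as a batch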

What is a data reader?

The data reader is a component that reads data from the database management system and provides it to the application. The data reader works in a connected manner: it reads a record from the database, passes it to the application, then reads the next, and so on.

What is a database command?

A database command specifies which particular action you want to perform on the database. The commands are expressed in SQL (Structured Query Language).

How do the different components of ADO.NET interact with each other in the disconnected architecture?

The data adapter contains the command and connection objects. It uses the connection object to connect to the database, executes its commands, fetches the results, and updates the dataset.

How do the different components of ADO.NET interact with each other in the connected architecture?

The Command object contains the Connection object. The Command object uses the contained connection (which must be open) to execute the SQL query; if the SQL statement is a SELECT, it returns a DataReader object. The data reader is a stream to the database that reads the resulting records and passes them to the application.

What's the difference between accessing data with a dataset and with a data reader?

The dataset is generally used when you want to employ the disconnected architecture of ADO.NET. It reads the data into a local memory buffer and performs the data operations (update, insert, delete) locally on this buffer.

The data reader, on the other hand, is directly connected to the database management system. It passes all the queries to the database management system, which executes them and returns the result back to the application.

Since no memory buffer is maintained by the data reader, it takes up fewer resources and performs more efficiently with small number of data operations. The dataset, on the other hand is more efficient when large number of updates are to be made to the database. All the updates are done in the local memory and are updated to the database in a batch. Since database connection remains open for the short time, the database management system does not get flooded with the incoming requests.

What are the performance considerations when using a dataset?

As noted above, the dataset performs all updates in local memory and applies them to the database as a batch, so the connection stays open only briefly and the database management system does not get flooded with incoming requests.

However, since the dataset stores the records in a local buffer in hierarchical form, it does take up more resources and may affect the overall performance of the application.


How to choose between the dataset and the data reader?

The data reader is more useful when you work with a large number of tables, access the database in a non-uniform pattern, and do not execute a large number of queries against the same few tables.

When you work with only a few tables and most of the time execute queries against those same tables, you should go for the dataset.

It also depends on the nature of the application. If multiple users are updating the database and every change must be visible immediately, you should not use the dataset; for this, .NET provides the connection-oriented architecture. But in scenarios where an instant update of the database is not required, the dataset provides optimal performance by making the changes locally and connecting to the database later to update a whole batch of data. This also reduces network bandwidth if the database is accessed over a network.

Disconnected data access is best suited to read-only services. On the downside, it is not designed for networked environments where multiple users update data simultaneously and each of them needs to see the current state of the database at all times (e.g., an airline reservation system).

How is XML supported in ADO.NET?

The dataset is represented in memory as an XML document. You can fill the dataset from XML and can also get the result out in the form of XML. Since XML is an international and widely accepted standard, you can read data using ADO.NET in XML form and pass it to other applications using a Web service. These consuming applications need not be .NET-based; they may be written in Java, C++, or any other programming language and run on any platform.
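A small sketch of the XML round-trip (data1.xml is the sample file used later in this FAQ; the output file name is arbitrary):

C#
// requires: using System.Data;
DataSet ds = new DataSet();
ds.ReadXml(Server.MapPath("data1.xml"));       // fill the dataset from an XML file

string xml = ds.GetXml();                      // the same data as an XML string
ds.WriteXml(Server.MapPath("data1-out.xml"));  // or serialize it back to a file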

How to get the count of records in the Database table using the DataSet?

VB.NET
ds.Tables(0).Rows.Count

C#
ds.Tables[0].Rows.Count;

How to check if the Dataset has records?

VB.NET
If ds.Tables(0).Rows.Count = 0 Then
'No record
Else
'Record Found
End If

C#
if (ds.Tables[0].Rows.Count == 0 )
{
//No record
}
else
{
//Record Found
}

How to retrieve value of a field in a dataset?

VB.NET
ds.Tables("TableName").Rows(0)("ColumnName")

C#
ds.Tables["TableName"].Rows[0]["ColumnName"];

where TableName and ColumnName can also be integers (without quotes) to refer to the table's or column's index position. Rows(0) refers to the first row in the DataTable's Rows collection.

How to filter the data in the DataView and display it in some DataControl?

VB.NET
Dim thefilter as string = "fieldname='' "
dbDataView.RowFilter = thefilter
Repeater1.DataSource = dbDataView
Repeater.DataBind()

C#
string thefilter = "fieldname='' ";
dbDataView.RowFilter = thefilter;
Repeater1.DataSource = dbDataView;
Repeater.DataBind();

How to truncate the data in the column?

VB.NET
Protected Function TruncateData(ByVal strNotes As String) As String
If strNotes.Length > 20 Then
Return strNotes.Substring(0, 20) & "..."
Else
Return strNotes
End If
End Function

C#
protected string TruncateData( string strNotes )
{
if (strNotes.Length > 20)
{
return strNotes.Substring(0,20) + "...";
}
else
{
return strNotes;
}
}

How to find the null fields in the datareader?

VB.NET
If dbReader("fieldname").Tostring= DBnull.Value.ToString()
'Empty field value
Else
'Display value
End if

C#
if (dbReader["fieldname").ToString() == DBNull.Value.ToString() )
{
//Empty field value
}
else
{
//display Value
}

How to query the database to get all the Table names?

SELECT * FROM information_schema.tables where Table_type='BASE TABLE'

A field with a bit data type displays as True/False on a Web page. How to display the bit value as 1/0?

VB.NET
'Using DataReader
While dr.Read()
Response.Write((dr("ProductName") + " "))
Response.Write((Convert.ToInt16(dr("discontinued")) + " "))
End While

C#
//Using DataReader
while (dr.Read ())
{
Response.Write (dr["ProductName"] + " ");
Response.Write (Convert.ToInt16 ( dr["discontinued"]) + " ");
}

How to get the count of items in a dataReader?

VB.NET
Dim mycn As New SqlConnection("server=localhost;uid=sa;password=;database=northwind;")

Dim mycmd As New SqlCommand("Select * from Products", mycn)
mycn.Open()
Dim dr As SqlDataReader = mycmd.ExecuteReader()
Dim i As Integer = 0
While dr.Read()
i += 1
End While
dr.Close()
mycn.Close()
Response.Write("Count of Records : " & i)

C#
SqlConnection mycn =new SqlConnection("server=localhost;uid=sa;password=;database=northwind;");

SqlCommand mycmd = new SqlCommand ("Select * from Products", mycn);
mycn.Open();
SqlDataReader dr = mycmd.ExecuteReader();
int i = 0;
while (dr.Read())
{
i += 1;
}
dr.Close();
mycn.Close();
Response.Write("Count of Records : " + i.ToString());

How to filter xml data and display data in the DataGrid?

VB.NET
Dim ds As New DataSet
ds.ReadXml(Server.MapPath("data1.xml"))
Dim dv As DataView = ds.Tables(0).DefaultView
dv.RowFilter = "prodId='product2-00'"
Me.DataGrid1.DataSource = dv
Me.DataBind()

C#
DataSet ds = new DataSet();
ds.ReadXml(Server.MapPath("data1.xml"));
DataView dv = ds.Tables[0].DefaultView;
dv.RowFilter = "prodId='product2-00'";
this.DataGrid1.DataSource = dv;
this.DataBind();

Why do I get the error message "ExecuteReader requires an open and available Connection. The connection's current state is Closed"?

This error occurs if you have not opened the connection. Call Open on the Connection object before reading data with the DataReader.

I get the error message "Keyword not supported: 'provider'", when using Sql Server why?

If you are using SqlConnection then the connection string should be as follows:
server=localhost;uid=sa;password=;database=northwind
i.e
server=;uid=;password=;database="

For SqlConnection we do not provide a Provider . Provider is used in cases where OleDbConnection is used.

I get the error message "Cast from type DBNull to type String is not valid." when I try to display DataReader values on form?

VB.NET
If dbReader("fieldname").ToString= DBnull.Value.ToString()
'Empty field value
Else
'Display value
End if

C#
if (dbReader["fieldname").ToString() == DBNull.Value.ToString() )
{
//Empty field value
}
else
{
//display Value
}

What is the significance of CommandBehavior.CloseConnection?

To avoid having to explicitly close the connection associated with the command used to create either a SqlDataReader or an OleDbDataReader, pass the CommandBehavior.CloseConnection argument to the ExecuteReader method of the Command object, i.e.:

VB.NET
dr= cmd.ExecuteReader(CommandBehavior.CloseConnection)

C#
dr= cmd.ExecuteReader(CommandBehavior.CloseConnection);

The associated connection will be closed automatically when the Close method of the DataReader is called. This makes it all the more important to always remember to call Close on your data readers.

How to loop through a Dataset to display all records?

VB.NET
'Fill Dataset
Dim dc As DataColumn
Dim dr As DataRow
For Each dr In ds.Tables(0).Rows
For Each dc In ds.Tables(0).Columns
Response.Write(dr(dc.ColumnName).ToString())
Next
Next

C#
//Fill the DataSet
foreach (DataRow dr in ds.Tables[0].Rows)
{
foreach( DataColumn dc in ds.Tables[0].Columns)
{
Response.Write(dr[dc.ColumnName].ToString());
}
}

What is connection pooling?

Connection pooling enables an application to use a connection from a pool of connections that do not need to be re-established for each use. Once a connection has been created and placed in a pool, an application can reuse it without performing the complete connection creation process.

When a user requests a connection, it is returned from the pool rather than a new connection being established; when the user releases the connection, it is returned to the pool rather than being destroyed.

When is the connection pool created?

A connection pool is created when a connection is opened for the first time. The pool is determined by an exact match on the connection string: each pool is associated with a distinct connection string, and when a new connection is requested whose string does not exactly match an existing pool, a new pool is created.

When is the connection pool destroyed?

When the last connection in the pool is closed, the pool is destroyed.

What happens when all the connections in the connection pool are consumed and a new connection request comes?

If the maximum pool size has been reached and no usable connection is available, the request is queued. The connection pooler satisfies these requests by reallocating connections as they are released back into the pool. Connections are released back into the pool when you call Close or Dispose on the Connection.
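The pool limits can be tuned in the connection string; a sketch based on the sample Northwind string used in this FAQ (100 connections and a 15-second timeout happen to be the SqlClient defaults; Min Pool Size=5 is arbitrary):

server=localhost;uid=sa;password=;database=northwind;Min Pool Size=5;Max Pool Size=100;Connect Timeout=15;

If no connection becomes free before the timeout elapses, the request fails with an InvalidOperationException.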

How can I enable connection pooling?

For .NET applications it is enabled by default. To make this explicit, you can add Pooling=true; to the connection string of the SqlConnection object.

How can I disable connection pooling?

ADO.NET data providers use connection pooling by default. If you want to turn this functionality off:

For a SqlConnection object, add this to the connection string:

Pooling=False;

For an OleDbConnection object, add this:

OLE DB Services=-4;

This way, the OLE DB data provider will mark your connection so that it does not participate in connection pooling.

What is a stored procedure?

A stored procedure is a precompiled executable object that contains one or more SQL statements. A stored procedure may be written to accept input parameters and return output values.
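A minimal C# sketch of calling one (GetProductCount is a hypothetical procedure taking a @CategoryID input and returning a count through a @Count output parameter):

C#
// requires: using System.Data; using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("GetProductCount", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;      // not inline SQL text
    cmd.Parameters.AddWithValue("@CategoryID", 1);      // input parameter
    SqlParameter count = cmd.Parameters.Add("@Count", SqlDbType.Int);
    count.Direction = ParameterDirection.Output;        // output parameter

    conn.Open();
    cmd.ExecuteNonQuery();
    int productCount = (int)count.Value;
}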
