Wednesday, August 26, 2009

Differences Between Shadowing and Overriding

Both overriding and shadowing are ways to alter the behaviour of members of a base class. Shadowing is a VB.NET concept; in C# the equivalent concept is called hiding, though there are differences between the two.
When we shadow, we provide a new implementation of a base class member without overriding it. We shadow a base class member in a derived class by using the Shadows keyword. The access level, return type, and signature (the data types of the arguments passed and the order of the types) of the shadowed member in the derived class may differ from those in the base class.
In C#, we achieve hiding using the new keyword. However, when hiding in C#, the access level, signature, and return type of the derived class member must be the same as in the base class.
Overriding is the concept of providing a new implementation of a base class member in the derived class. In VB.NET, we override using the Overrides keyword, while in C# overriding is achieved using the override keyword. For a class member to be overridable, we mark it with the virtual keyword when defining it in C#, and with the Overridable keyword in VB.NET; if the Overridable keyword is left out, the member is not overridable, which is the default.
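The distinction is easiest to see in code. Below is a minimal C# sketch (the class and member names are made up for illustration): override replaces the base implementation even when the member is called through a base-class reference, while new (hiding, the C# counterpart of Shadows) does not.

using System;

class BaseClass
{
    public virtual string Describe() { return "BaseClass.Describe"; }
    public string Greet() { return "BaseClass.Greet"; }
}

class DerivedClass : BaseClass
{
    // Overriding: replaces the base implementation, even via a BaseClass reference.
    public override string Describe() { return "DerivedClass.Describe"; }

    // Hiding (shadowing in VB.NET): a new member that hides the base one.
    public new string Greet() { return "DerivedClass.Greet"; }
}

class Program
{
    static void Main()
    {
        BaseClass b = new DerivedClass();
        Console.WriteLine(b.Describe()); // DerivedClass.Describe (overridden)
        Console.WriteLine(b.Greet());    // BaseClass.Greet (hidden, not overridden)
    }
}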
You can also refer to MSDN for a comparison table: http://msdn.microsoft.com/en-us/library/ms172785(VS.80).aspx

Shallow copying/Deep copying/Object Cloning in C#

When we set Object2 = Object1, the assignment is by reference: both variables point to the same object. When we do Object2 = Object1.MemberwiseClone(), a copy of the object is created, so changes to Object2 itself (and to its value-type fields) will not be reflected in Object1, although reference-type fields still point to the same objects, as explained below.

The MemberwiseClone method creates a shallow copy by creating a new object, and then copying the nonstatic fields of the current object to the new object. If a field is a value type, a bit-by-bit copy of the field is performed. If a field is a reference type, the reference is copied but the referred object is not; therefore, the original object and its clone refer to the same object.

using System;

class MyBaseClass
{
    public static string CompanyName = "My Company";
    public int age;
    public string name;
}

class MyDerivedClass : MyBaseClass
{
    static void Main()
    {
        // Create an instance of MyDerivedClass and assign values to its fields.
        MyDerivedClass m1 = new MyDerivedClass();
        m1.age = 28;
        m1.name = "Pradeep";

        // Perform a shallow copy of m1 and assign it to m2.
        MyDerivedClass m2 = (MyDerivedClass)m1.MemberwiseClone();
    }
}

Shallow copying means that the copied object's fields will reference the same objects as the original object. To allow shallow copying, add the following Clone method to your class:


using System;
using System.Collections;
using System.Collections.Generic;

public class ShallowClone : ICloneable
{
    public int data = 1;
    public List<string> listData = new List<string>();
    public object objData = new object();

    public object Clone()
    {
        return this.MemberwiseClone();
    }
}
Deep copying or cloning means that the copied object's fields will reference new copies of the original object's fields. This method of copying is more time-consuming than the shallow copy. To allow deep copying, add the following Clone method to your class:

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class DeepClone : ICloneable
{
    public int data = 1;
    public List<string> listData = new List<string>();
    public object objData = new object();

    public object Clone()
    {
        // Serialize this object to a memory stream and deserialize it
        // to produce a completely independent copy.
        BinaryFormatter BF = new BinaryFormatter();
        MemoryStream memStream = new MemoryStream();
        BF.Serialize(memStream, this);
        memStream.Position = 0;
        return BF.Deserialize(memStream);
    }
}
Cloning is the ability to make an exact copy (a clone) of an instance of a type. Cloning may take one of two forms: a shallow copy or a deep copy. Shallow copying is relatively easy. It involves copying the object that the Clone method was called on.

The reference type fields in the original object are copied over, as are the value-type fields. This means that if the original object contains a field of type StreamWriter, for instance, the cloned object will point to this same instance of the original object's StreamWriter; a new object is not created.
Support for shallow copying is implemented by the MemberwiseClone method of the Object class, which serves as the base class for all .NET classes. So the following code allows a shallow copy to be created and returned by the Clone method:

public object Clone( )  {return (this.MemberwiseClone( ));}
Making a deep copy is the second way of cloning an object. A deep copy will make a copy of the original object just as the shallow copy does. However, a deep copy will also make separate copies of each reference type field in the original object. Therefore, if the original object contains a StreamWriter type field, the cloned object will also contain a StreamWriter type field, but the cloned object's StreamWriter field will point to a new StreamWriter object, not the original object's StreamWriter object.
Support for deep copying is not automatically provided by the Clone method or the .NET Framework. Instead, the following code illustrates an easy way of implementing a deep copy:
BinaryFormatter BF = new BinaryFormatter( );
MemoryStream memStream = new MemoryStream( );
BF.Serialize(memStream, this);
memStream.Flush( );
memStream.Position = 0;
return (BF.Deserialize(memStream));
Basically, the original object is serialized out to a memory stream using binary serialization, then it is deserialized into a new object, which is returned to the caller. Note that it is important to reposition the memory stream pointer back to the start of the stream before calling the Deserialize method; otherwise, an exception indicating that the serialized object contains no data will be thrown.
Performing a deep copy using object serialization allows the underlying object to be changed without having to modify the code that performs the deep copy. If you performed the deep copy by hand, you'd have to make a new instance of all the instance fields of the original object and copy them over to the cloned object. This is a tedious chore in and of itself. If a change is made to the fields of the object being cloned, the deep copy code must also change to reflect this modification. Using serialization, you rely on the serializer to dynamically find and serialize all fields contained in the object. If the object is modified, the serializer will still make a deep copy without any code modifications.
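As a quick illustration of the difference (a hypothetical snippet that assumes the ShallowClone and DeepClone classes shown above), mutating a reference-type field after cloning affects the shallow copy but not the deep copy:

ShallowClone s1 = new ShallowClone();
ShallowClone s2 = (ShallowClone)s1.Clone();
s1.listData.Add("new item");
Console.WriteLine(s2.listData.Count); // 1 - the list is shared between s1 and s2

DeepClone d1 = new DeepClone();
DeepClone d2 = (DeepClone)d1.Clone();
d1.listData.Add("new item");
Console.WriteLine(d2.listData.Count); // 0 - d2 has its own independent list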

ASP.NET Page Life Cycle

Introduction

This article describes the life cycle of the page from the moment the URL is hit from the web browser till the HTML code is generated and sent to the web browser. Let us start by looking at some keywords that are involved in the life cycle of the page.
Background
IIS: IIS (Internet Information Server) is a complete Web server that makes it possible to quickly and easily deploy powerful Web sites and applications. It is the default web server used with .NET. When a Web server (for ASP.NET applications, typically IIS) receives a request, it examines the file-name extension of the requested file, determines which ISAPI extension should handle the request, and then passes the request to the appropriate ISAPI extension. (By default, ASP.NET handles file name extensions that have been mapped to it, such as .aspx, .ascx, .ashx, and .asmx.)
Note:
a. If a file name extension has not been mapped to ASP.NET, ASP.NET will not receive the request. IIS handles it and returns the requested page/image/file without any processing.
b. If you create a custom handler to service a particular file name extension, you must map the extension to ASP.NET in IIS and also register the handler in your application's Web.config file.
ASPNET_ISAPI.DLL: This dll is the ISAPI extension provided with ASP.NET to process the web page requests. IIS loads this dll and sends the page request to this dll. This dll loads the HTTPRuntime for further processing.
ASPNET_WP.EXE: ASPNET_WP.EXE is the ASP.NET worker process. Each Application Pool is served by a worker process and can contain any number of applications, each running in its own AppDomain. When a web page is requested, IIS looks up the application pool under which the current application is running and forwards the request to the respective worker process.
HTTP Pipeline: The HTTP Pipeline is the general-purpose framework for server-side HTTP programming that serves as the foundation for ASP.NET pages as well as Web Services. All the stages involved, from creating the HTTP Runtime to invoking the HTTP Handler, are collectively called the HTTP Pipeline.
HTTP Runtime: Each AppDomain has its own instance of the HttpRuntime class—the entry point in the pipeline. The HttpRuntime object initializes a number of internal objects that will help carry the request out. The HttpRuntime creates the context for the request and fills it up with any HTTP information specific to the request. The context is represented by an instance of the HttpContext class. Another helper object that gets created at such an early stage of the HTTP runtime setup is the text writer—to contain the response text for the browser. The text writer is an instance of the HttpWriter class and is the object that actually buffers any text programmatically sent out by the code in the page. Once the HTTP runtime is initialized, it finds an application object to fulfill the request. The HttpRuntime object examines the request and figures out which application it was sent to (from the pipeline's perspective, a virtual directory is an application).
HTTP Context: This is created by HTTP Runtime. The HttpContext class contains objects that are specific to the current page request, such as the HttpRequest and HttpResponse objects. You can use this class to share information between pages. It can be accessed with Page.Context property in the code.
HTTP Request: Provides access to the current page request, including the request headers, cookies, client certificate, query string, and so on. You can use this class to read what the browser has sent. It can be accessed with Page.Request property in the code.
HTTP Response: Provides access to the output stream for the current page. You can use this class to inject text into the page, to write cookies, and more. It can be accessed with Page.Response property in the code.
HTTP Application: An application object is an instance of the HttpApplication class—the class behind the global.asax file. HTTPRuntime uses HttpApplicationFactory to create the HTTPApplication object. The main task accomplished by the HTTP application manager is finding the class that will actually handle the request. When the request is for an .aspx resource, the handler is a page handler—namely, an instance of a class that inherits from Page. The association between types of resources and types of handlers is stored in the configuration file of the application. More exactly, the default set of mappings is defined in the <httpHandlers> section of the machine.config file. However, the application can customize the list of its own HTTP handlers in the local web.config file. The line below illustrates the mapping that defines the HTTP handler for .aspx resources.
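A typical mapping looks like this (the exact attributes vary by framework version):

<add verb="*" path="*.aspx" type="System.Web.UI.PageHandlerFactory" validate="True" />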
HttpApplicationFactory: Its main task consists of using the URL information to find a match between the virtual directory of the URL and a pooled HttpApplication object.
HTTP Module: An HTTP module is an assembly that is called on every request that is made to your application. HTTP modules are called as part of the ASP.NET request pipeline and have access to life-cycle events throughout the request. HTTP modules let you examine incoming and outgoing requests and take action based on the request. They also let you examine the outgoing response and modify it. ASP.NET uses modules to implement various application features, which includes forms authentication, caching, session state, and client script services. In each case, when those services are enabled, the module is called as part of a request and performs tasks that are outside the scope of any single page request. Modules can consume application events and can raise events that can be handled in the Global.asax file.
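As a rough sketch (the class name and logic are hypothetical), a custom module implements IHttpModule, subscribes to pipeline events in Init, and is registered in the <httpModules> section of web.config:

using System;
using System.Web;

// A hypothetical module that stamps every response with the time taken to serve it.
public class TimingModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // These events are raised for every request that passes through the pipeline.
        context.BeginRequest += delegate { context.Context.Items["start"] = DateTime.UtcNow; };
        context.EndRequest += delegate
        {
            DateTime start = (DateTime)context.Context.Items["start"];
            context.Context.Response.AppendHeader(
                "X-Elapsed-Ms", (DateTime.UtcNow - start).TotalMilliseconds.ToString());
        };
    }

    public void Dispose() { }
}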
HTTP Handler: An ASP.NET HTTP handler is the process that runs in response to a request that is made to an ASP.NET Web application. The most common handler is an ASP.NET page handler that processes .aspx files. When users request an .aspx file, the request is processed by the page handler. We can write our own handler and handler factory if we want to handle the page request in a different manner.
Note: HTTP modules differ from HTTP handlers. An HTTP handler returns a response to a request that is identified by a file name extension or family of file name extensions. In contrast, an HTTP module is invoked for all requests and responses. It subscribes to event notifications in the request pipeline and lets you run code in registered event handlers. The tasks that a module is used for are general to an application and to all requests for resources in the application.
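For comparison, a minimal custom handler (the class name and extension below are made up) implements IHttpHandler and is mapped to a file name extension in the <httpHandlers> section of web.config:

using System;
using System.Web;

// A hypothetical handler, mapped in web.config with something like:
// <add verb="*" path="*.report" type="ReportHandler" />
public class ReportHandler : IHttpHandler
{
    // True means a single instance can be reused across requests.
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Report generated at " + DateTime.Now);
    }
}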
Life Cycle of Page
1. Web page request comes from browser.
2. IIS maps the ASP.NET file extensions to ASPNET_ISAPI.DLL, an ISAPI extension provided with ASP.NET.
3. ASPNET_ISAPI.DLL forwards the request to the ASP.NET worker process (ASPNET_WP.EXE or W3WP.EXE).
4. The ISAPI extension loads the HTTPRuntime and passes the request to it. At this point the HTTP Pipeline begins.
5. HTTPRuntime uses HttpApplicationFactory to either create or reuse the HTTPApplication object.
6. HTTPRuntime creates HTTPContext for the current request. HTTPContext internally maintains HTTPRequest and HTTPResponse.
7. HTTPRuntime also maps the HTTPContext to the HTTPApplication which handles the application level events.
8. HTTPApplication runs the HTTPModules for the page requests.
9. HTTPApplication creates the HTTPHandler for the page request. This is the last stage of the HTTP Pipeline.
10. HTTPHandlers are responsible for processing the request and generating the corresponding response message.
11. Once the request leaves the HTTPPipeline, page level events begin.
12. Page events are as follows: PreInit, Init, InitComplete, PreLoad, Load, control events (postback events), LoadComplete, PreRender, SaveStateComplete, Render and Unload.
13. The HTTPHandler generates the response through the above events and sends it back to IIS, which in turn sends the response to the client browser.
Events in the life cycle of page
PreInit: All the Pre and Post events were introduced as part of .NET Framework 2.0. As the name suggests, this event is fired before the Init event. The most common tasks performed here include the following (a short sketch follows the list):
a. Check the IsPostBack property
b. Set the master page dynamically
c. Set the theme property of the page dynamically
d. Read or Set the profile property values.
e. Re-create the dynamic controls
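A minimal sketch of a PreInit handler (the master page, theme, profile property and control names are all hypothetical):

protected void Page_PreInit(object sender, EventArgs e)
{
    // a. IsPostBack is already available at this point.
    if (!IsPostBack)
    {
        // First-request-only work can go here.
    }

    // b, c. Master page and theme can only be assigned this early in the life cycle.
    this.MasterPageFile = "~/Site2.Master";   // hypothetical master page
    this.Theme = "BlueTheme";                 // hypothetical theme

    // d. Read or set profile property values, e.g.:
    // this.Theme = Profile.PreferredTheme;   // assumes a profile property named PreferredTheme

    // e. Dynamic controls must be re-created on every request, including postbacks;
    //    they are typically added to a container control (e.g. a PlaceHolder) here or in Init.
    TextBox dynamicBox = new TextBox();
    dynamicBox.ID = "txtDynamic";
}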
Init: This event is raised after all controls in the page have been initialized and any skin settings have been applied. Use it to read or initialize control properties. It can also be used to register events for controls whose events are not wired up in the aspx page.
Ex: the OnClick event of a Button can be registered in Init rather than being specified in the OnClick attribute of the Button in the aspx page (see the snippet below).
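A minimal sketch (the control and handler names are hypothetical):

protected void Page_Init(object sender, EventArgs e)
{
    // Equivalent to OnClick="btnSave_Click" in the .aspx markup.
    btnSave.Click += new EventHandler(btnSave_Click);
}

protected void btnSave_Click(object sender, EventArgs e)
{
    // Handle the click here.
}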
InitComplete: Use this event for processing tasks that require all initialization be complete.
PreLoad: Use this event if you need to perform processing on your page or control before the Load event. After the Page raises this event, it loads view state for itself and all controls, and then processes any postback data included with the Request instance.
Load: The Page calls the OnLoad event method on the Page, then recursively does the same for each child control, which does the same for each of its child controls until the page and all controls are loaded. Use the OnLoad event method to set properties in controls and establish database connections.
Control events: Use these events to handle specific control events, such as a Button control's Click event or a TextBox control's TextChanged event.
LoadComplete: Use this event for tasks that require that all other controls on the page be loaded.
PreRender: This is the last event raised before the HTML code is generated for the page. The PreRender event also occurs for each control on the page. Use the event to make final changes to the contents of the page or its controls.
SaveStateComplete: Before this event occurs, ViewState has been saved for the page and for all controls. Any changes to the page or controls at this point will be ignored.
Use this event to perform tasks that require view state to have been saved, but that do not make any changes to controls.
Render: This is the stage where the HTML code for the page is rendered. The Page object calls the Render method of each control at this stage. All ASP.NET Web server controls have a Render method that writes out the control's markup that is sent to the browser.
UnLoad: This event occurs for each control and then for the page. In controls, use this event to do final cleanup for specific controls, such as closing control-specific database connections.
For the page itself, use this event to do final cleanup work, such as closing open files and database connections, or finishing up logging or other request-specific tasks.

Tuesday, August 25, 2009

Scrum

Scrum is an iterative incremental process of software development commonly used with agile software development. Despite the fact that "Scrum" is not an acronym, some companies implementing the process have been known to adhere to an all capital letter expression of the word, i.e. SCRUM. This may be due to one of Ken Schwaber's early papers capitalizing SCRUM in the title.
Meetings
Daily Scrum
Each day during the sprint, a project status meeting occurs. This is called a "scrum", or "the daily standup". The scrum has specific guidelines:
• The meeting starts precisely on time. Often there are team-decided punishments for tardiness (e.g. money, push-ups, hanging a rubber chicken around your neck.)
• All are welcome, but only "pigs" may speak.
• The meeting is timeboxed at 15-20 minutes depending on the team's size.
• All attendees should stand (it helps to keep meeting short).
• The meeting should happen at the same location and same time every day.
During the meeting, each team member answers three questions:
• What have you done since yesterday?
• What are you planning to do today?
• Do you have any problems preventing you from accomplishing your goal? (It is the role of the ScrumMaster to remember these impediments.)
Sprint Planning Meeting
At the beginning of the sprint cycle (every 15–30 days), a "Sprint Planning Meeting" is held.
• Select what work is to be done.
• Prepare the Sprint Backlog, which details the time it will take to do that work, with the entire team.
• Identify and communicate how much of the work is likely to be done during the current sprint.
• Eight hour limit.
Sprint Review Meeting
At the end of a sprint cycle, two meetings are held: the "Sprint Review Meeting" and the "Sprint Retrospective".
• Review the work that was completed and not completed.
• Present the completed work to the stakeholders (a.k.a. "the demo").
• Incomplete work cannot be demonstrated.
• Four hour time limit.
Sprint Retrospective
• All team members reflect on the past sprint.
• Make continuous process improvement.
• Two main questions are asked in the sprint retrospective: What went well during the sprint? What could be improved in the next sprint?
• Three hour time limit.
Artifacts
Product backlog
The product backlog is a high-level document for the entire project. It contains backlog items: broad descriptions of all required features, wish-list items, etc. prioritised by business value. It is the "What" that will be built. It is open and editable by anyone and contains rough estimates of both business value and development effort. Those estimates help the Product Owner to gauge the timeline and, to a limited extent, priority. For example, if the "add spellcheck" and "add table support" features have the same business value, the one with the smallest development effort will probably have higher priority, because the ROI is higher.
The product backlog is property of the Product Owner. Business value is set by the Product Owner. Development effort is set by the Team.
Sprint backlog
The sprint backlog is a greatly detailed document containing information about how the team is going to implement the features for the upcoming sprint. Features are broken down into tasks; as a best practice tasks are normally estimated between four and 16 hours of work. With this level of detail the whole team understands exactly what to do, and anyone can potentially pick a task from the list. Tasks on the sprint backlog are never assigned; rather, tasks are signed up for by the team members as needed, according to the set priority and the team member skills.
The sprint backlog is property of the Team. Estimations are set by the Team. Often an according Task Board is used to see and change the state of the tasks of the current sprint, like "to do", "in progress" and "done".
Burn down
The burn down chart is a publicly displayed chart showing remaining work in the sprint backlog. Updated every day, it gives a simple view of the sprint progress. It also provides quick visualizations for reference.
It should not be confused with an earned value chart.
Adaptive project management 
The following are some general practices of Scrum:
• Customers become a part of the development team (i.e. the customer must be genuinely interested in the output.)
• Scrum has frequent intermediate deliveries with working functionality, like all other forms of agile software processes. This enables the customer to get working software earlier and enables the project to change its requirements according to changing needs.
• Frequent risk and mitigation plans are developed by the development team itself—risk mitigation, monitoring and management (risk analysis) occurs at every stage and with commitment.
• Transparency in planning and module development—let everyone know who is accountable for what and by when.
• Frequent stakeholder meetings to monitor progress—balanced dashboard updates (delivery, customer, employee, process, stakeholders)
• There should be an advance warning mechanism, i.e. visibility to potential slippage or deviation ahead of time.
• No problems are swept under the carpet. No one is penalized for recognizing or describing any unforeseen problem.
• Workplaces and working hours must be energized—"Working more hours" does not necessarily mean "producing more output."

Difference between REST and SOAP

The two implementations of Web services architecture are actually completely different; they may accomplish much the same results, but they go about their jobs in entirely separate ways.
With SOAP, each Web service has its own URL, and a request made of that URL is submitted as a message in an XML enclosure. Each request uses an instruction that's part of the namespace of the service, which is itself described through an XML format called the Web Services Description Language (WSDL). So a Web client passes a message to the service, using the lexicon outlined by WSDL and enclosed in a SOAP envelope. The service then responds with results that are enclosed in a very symmetrical fashion, so that the queried element and the response tend to match up.
The key benefits of SOAP are that it is transport-agnostic (just because it uses HTTP now doesn't mean it has to in ten or fifteen years' time), and that it's easy to associate a Web service with an appropriate URL. That makes directories of Web services easier to assemble.
REST is actually a bit simpler to explain, especially to someone who hasn't grown too accustomed to SOAP. Unlike SOAP, REST relies entirely on HTTP. But because of that, its request language is already known: there are only four "verbs" in REST, and they translate directly to the HTTP methods GET, POST, PUT, and DELETE. So the need for WSDL on the request side disappears entirely.
With REST, the item of data the client requests -- not the Web service itself -- is the target of the URL. For example, the field where a customer's surname would appear in a database may be the URL, whereas in SOAP, the URL refers to the service to which a request for that surname would be placed in an envelope. The server responds with a message in an XML envelope that pairs both the item that was requested and its response, which makes it easier for an auditing application to account for data transactions.
Is REST necessarily an easier way to go? It is if you're developing applications using a new concept called the model view controller scheme. These are the "composite" applications to which Microsoft's marketing literature refers; they involve three separate components which may be programmed in completely different languages, and may be running on separate processors. The model component sets up the data that's the subject of an application, whereas the view component prepares a meaningful relationship of that data for a human reader. This tends to translate well to systems where JavaScript or dynamic language code can do the modeling, and HTML can set up the view. The controller may be a server application that maintains the active state and integrity of the database.
It's an extremely sensible way to think of a network application, and it may be much easier to develop such a system because the three aspects of maintaining it can be delegated to separate development teams. But it almost mandates that the "what" of the application -- the part which the model component is setting up, and the view is preparing to lay out -- have a discrete name, without relying upon some Web service to give it a name later on during the transaction process. That's where the REST model may be a better fit.

The Application, Page and Control lifecycle in ASP.NET v2.0

Application: BeginRequest
Application: PreAuthenticateRequest
Application: AuthenticateRequest
Application: PostAuthenticateRequest
Application: PreAuthorizeRequest
Application: AuthorizeRequest
Application: PostAuthorizeRequest
Application: PreResolveRequestCache
Application: ResolveRequestCache
Application: PostResolveRequestCache
Application: PreMapRequestHandler
Page: Construct
Application: PostMapRequestHandler
Application: PreAcquireRequestState
Application: AcquireRequestState
Application: PostAcquireRequestState
Application: PreRequestHandlerExecute
Page: AddParsedSubObject
Page: CreateControlCollection
Page: AddedControl
Page: AddParsedSubObject
Page: AddedControl
Page: ResolveAdapter
Page: DeterminePostBackMode
Page: PreInit
Control: ResolveAdapter
Control: Init
Control: TrackViewState
Page: Init
Page: TrackViewState
Page: InitComplete
Page: LoadPageStateFromPersistenceMedium
Control: LoadViewState
Page: EnsureChildControls
Page: CreateChildControls
Page: PreLoad
Page: Load
Control: DataBind
Control: Load
Page: EnsureChildControls
Page: LoadComplete
Page: EnsureChildControls
Page: PreRender
Control: EnsureChildControls
Control: PreRender
Page: PreRenderComplete
Page: SaveViewState
Control: SaveViewState
Page: SaveViewState
Control: SaveViewState
Page: SavePageStateToPersistenceMedium
Page: SaveStateComplete
Page: CreateHtmlTextWriter
Page: RenderControl
Page: Render
Page: RenderChildren
Control: RenderControl
Page: VerifyRenderingInServerForm
Page: CreateHtmlTextWriter
Control: Unload
Control: Dispose
Page: Unload
Page: Dispose
Application: PostRequestHandlerExecute
Application: PreReleaseRequestState
Application: ReleaseRequestState
Application: PostReleaseRequestState
Application: PreUpdateRequestCache
Application: UpdateRequestCache
Application: PostUpdateRequestCache
Application: EndRequest
Application: PreSendRequestHeaders
Application: PreSendRequestContent

Monday, August 24, 2009

Visual Studio 2010 and the .NET Framework 4.0

Visual Studio 2010 and the .NET Framework 4.0 mark the next generation of developer tools from Microsoft. Designed to address the latest needs of developers, Visual Studio and the .NET Framework deliver key innovations in the following pillars:
• Democratizing Application Lifecycle Management
Application Lifecycle Management (ALM) crosses many roles within an organization and traditionally not every one of the roles has been an equal player in the process. Visual Studio Team System 2010 continues to build the platform for functional equality and shared commitment across an organization’s ALM process.
• Enabling emerging trends
Every year the industry develops new technologies and new trends. With Visual Studio 2010, Microsoft delivers tooling and framework support for the latest innovations in application architecture, development and deployment.
What's New in ASP.NET 4 and Visual Web Developer 2010:
• ASP.NET Core Services
• Extensible Output Caching
• Auto-Start Web Applications
• Permanently Redirecting a Page
• The Incredible Shrinking Session State
• ASP.NET Web Forms
• Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties
• Enabling View State for Individual Controls
• Changes to Browser Capabilities
• Routing in ASP.NET 4
• Setting Client IDs
• Persisting Row Selection in Data Controls
• FormView Control Enhancements
• ListView Control Enhancements
• Filtering Data with the QueryExtender Control
• Dynamic Data

• Declarative DynamicDataManager Control Syntax
• Entity Templates
• New Field Templates for URLs and E-mail Addresses
• Creating Links with the DynamicHyperLink Control
• Support for Inheritance in the Data Model
• Support for Many-to-Many Relationships (Entity Framework Only)
• New Attributes to Control Display and Support Enumerations
• Enhanced Support for Filters
• AJAX Functionality in ASP.NET 4
• Client template rendering.
• Instantiating behaviors and controls declaratively.
• Live data binding.
• Support for the observer pattern with JavaScript objects and arrays.
• The AdoNetServiceProxy class for client-side interaction with ADO.NET Data Services.
• The DataView control for data-bound UI in the browser.
• The DataContext and AdoNetDataContext classes for interaction with Web services.
• Refactoring the Microsoft AJAX Framework libraries.
• Visual Web Developer Enhancements
• Improved CSS Compatibility
• HTML and JScript Snippets
• JScript IntelliSense Enhancements
• Web Application Deployment with Visual Studio 2010
• Web packaging
• Web configuration-file transformation
• Database deployment
• One-Click publishing
• Support for MVC-Based Web Applications
ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD).
• Enhancements to ASP.NET Multi-Targeting
ASP.NET 4 adds new features to the multi-targeting feature to make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced with ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework.
Download the Visual Studio 2010 and .NET Framework 4.0 Training Kit:
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=752cb725-969b-4732-a383-ed5740f02e93

Thursday, August 13, 2009

How to consume Web Service in Silverlight Application

To consume a Web Service in a Silverlight application, follow these steps:
1. Write a Web Service.

2. Add a reference to the Web Service in the Silverlight application.
3. Create a file named clientaccesspolicy.xml in the wwwroot folder where the Web Service is hosted, with the following content:
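A typical clientaccesspolicy.xml that allows calls from any domain (tighten the domain uri for production use) looks roughly like this:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>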

4. Create a file named crossdomain.xml in the ClientBin folder of your Silverlight Web Application, with the following content:
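A typical crossdomain.xml that grants access to all domains looks roughly like this:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>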

Encrypting configuration information of web.config

The ASP.NET IIS Registration Tool (Aspnet_regiis.exe) is used to encrypt or decrypt sections of a Web configuration file. ASP.NET automatically decrypts encrypted configuration elements when the Web.config file is processed.
The following command encrypts the connectionStrings element in the Web.config file for the application SampleApplication. Because no -site option is included, the application is assumed to be from Web site 1 (most commonly Default Web Site in IIS). The encryption is performed using the RsaProtectedConfigurationProvider specified in the machine configuration.
aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"
The following command decrypts the connectionStrings element in the Web.config file for the ASP.NET application SampleApplication:
aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"
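The same protection can also be applied programmatically through the configuration API; a rough C# sketch (the application path matches the example above):

using System.Configuration;
using System.Web.Configuration;

// Encrypt the connectionStrings section of /SampleApplication from code.
Configuration config = WebConfigurationManager.OpenWebConfiguration("/SampleApplication");
ConfigurationSection section = config.GetSection("connectionStrings");
if (section != null && !section.SectionInformation.IsProtected)
{
    section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider");
    config.Save();
}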
Link: http://www.beansoftware.com/ASP.NET-Tutorials/Encrypting-Connection-String.aspx

Thursday, August 6, 2009

Isolation Levels in SQL Server 2005

Isolation levels come into play when you need to isolate a resource for a transaction and protect it from other transactions. The protection is done by obtaining locks. SQL Server decides which locks need to be taken, and how long they are held, based on the Isolation Level that has been set for the transaction. Lower Isolation Levels allow multiple users to access the resource simultaneously (concurrency), but they may introduce concurrency related problems such as dirty reads and data inaccuracy. Higher Isolation Levels eliminate concurrency related problems and increase data accuracy, but they may introduce blocking.
Note that first four Isolation Levels described below are ordered from lowest to highest. The two subsequent levels are new to SQL Server 2005, and are described separately.
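The examples below set the level with the SET TRANSACTION ISOLATION LEVEL statement in T-SQL; for completeness, client code can also request a level when a transaction is started. A minimal ADO.NET sketch (the connection string is a placeholder):

using System.Data;
using System.Data.SqlClient;

using (SqlConnection connection = new SqlConnection("Data Source=.;Initial Catalog=Northwind;Integrated Security=True"))
{
    connection.Open();
    // Ask for the desired isolation level when the transaction begins.
    using (SqlTransaction transaction = connection.BeginTransaction(IsolationLevel.RepeatableRead))
    {
        SqlCommand command = new SqlCommand(
            "SELECT HireDate FROM dbo.Employees WHERE EmployeeID = 1", connection, transaction);
        object hireDate = command.ExecuteScalar();
        transaction.Commit();
    }
}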
Read Uncommitted Isolation Level
This is the lowest level. It can be set to provide higher concurrency, but it introduces all of the concurrency problems: dirty reads, lost updates, nonrepeatable reads (inconsistent analysis) and phantom reads. This Isolation Level can be tested easily.
Connection1 opens a transaction and starts updating Employees table.
USE Northwind
BEGIN TRAN
-- update the HireDate from 5/1/1992 to 5/2/1992
UPDATE dbo.Employees
SET HireDate = '5/2/1992'
WHERE EmployeeID = 1
Connection2 tries to read same record.
USE Northwind
SELECT HireDate
FROM dbo.Employees
WHERE EmployeeID = 1
You will see that Connection2 cannot read the data because an exclusive lock has been set on the resource by Connection1, and exclusive locks are not compatible with other locks. Though this reduces concurrency, it eliminates data inaccuracy by not allowing others to see uncommitted data. Now let's set the Isolation Level of Connection2 to Read Uncommitted and see.
USE Northwind
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT HireDate
FROM dbo.Employees
WHERE EmployeeID = 1
-- results HireDate as 5/2/1992
As you expected, Connection2 can see the record that is being modified by Connection1 which is an uncommitted record. This is called dirty-reading. You can expect higher level of concurrency by setting the Isolation Level to Read Uncommitted but you may face all concurrency related problems. Imagine the consequences when Connection1 rolls back the transaction but Connection2 makes a decision from the result before the roll back.
Read Committed Isolation Level
This is the default Isolation Level of SQL Server. It eliminates dirty reads, but not the other concurrency related problems. You have already seen this: in the sample above, Connection2 could not read the data before its Isolation Level was set to Read Uncommitted. That is because it had been running at the default Isolation Level, Read Committed, which disallows reading uncommitted data. Though it stops dirty reads, it may introduce other problems. Let's take a simple example that shows lost updates.
The Employees table contains data related to employees. A new employee joins and a record is inserted into the table.
USE Northwind
INSERT INTO dbo.Employees
(LastName, FirstName, Title, TitleOfCourtesy, BirthDate, HireDate)
VALUES
('Lewis', 'Jane', 'Sales Representative', 'Ms.', '03/04/1979', '06/23/2007')
This table contains a column called Notes that describes the employee's education background. Data entry operators fill this column by looking at his or her file. Assume that the update code has been written as below. Note that no Isolation Level has been set, which means the default is used.
IF OBJECT_ID(N'dbo.UpdateNotes', N'P') IS NOT NULL
BEGIN
DROP PROC dbo.UpdateNotes
END
GO
CREATE PROCEDURE dbo.UpdateNotes @EmployeeID int, @Notes ntext
AS
BEGIN
DECLARE @IsUpdated bit
BEGIN TRAN
SELECT @IsUpdated = CASE WHEN Notes IS NULL THEN 0 ELSE 1 END
FROM dbo.Employees
WHERE EmployeeID = @EmployeeID -- new record
-- The below statement added to hold the transaction for 5 seconds
-- Consider it is as a different process that do something else.
WAITFOR DELAY '00:00:5'
IF (@IsUpdated = 0)
BEGIN
UPDATE dbo.Employees
SET Notes = @Notes
WHERE EmployeeID = @EmployeeID
END
ELSE
BEGIN
ROLLBACK TRAN
RAISERROR ('Note has been already updated!', 16, 1)
RETURN
END
COMMIT TRAN
END
Operator1 makes Connection1 and executes the following query.
EXEC dbo.UpdateNotes 15, 'Jane has a BA degree in English from the University of Washington.'
Within a few seconds (in this case, right after Operator1 started), Operator2 makes Connection2 and executes the same procedure with a different note, before Operator1's process completes.
EXEC dbo.UpdateNotes 15, 'Jane holds a BA degree in English.'
If you query the record after both processes, you will see that the note entered by Operator2 has been set on the record. Operator1 made its update and no error message was returned to it, but its update has been lost. This could be avoided if the record were locked and held as soon as it was identified as a not-yet-updated record, but obtaining and holding such a lock is not possible with the Read Committed Isolation Level. Because of this, concurrency related problems such as lost updates, nonrepeatable reads and phantom reads can happen at this Isolation Level.
Repeatable Read Isolation Level
This Isolation Level addresses all concurrency related problems except phantom reads. Unlike Read Committed, it does not release the shared lock once the record is read; it obtains the shared lock for reading and keeps it until the transaction is over. This stops other transactions from modifying the resource, avoiding lost updates and nonrepeatable reads. Change the Isolation Level of the stored procedure we used for the Read Committed sample.
IF OBJECT_ID(N'dbo.UpdateNotes', N'P') IS NOT NULL
BEGIN
DROP PROC dbo.UpdateNotes
END
GO
CREATE PROCEDURE dbo.UpdateNotes @EmployeeID int, @Notes ntext
AS
BEGIN
DECLARE @IsUpdated bit
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT @IsUpdated = CASE WHEN Notes IS NULL THEN 0 ELSE 1 END
FROM dbo.Employees
WHERE EmployeeID = @EmployeeID -- new record
Now make two connections and execute the queries below, just as you did with the Read Committed sample. Make sure you set the Notes column value back to NULL before executing them.
With Connection1;
EXEC dbo.UpdateNotes 15, 'Jane has a BA degree in English from the University of Washington.'
With Connection2;
EXEC dbo.UpdateNotes 15, 'Jane holds a BA degree in English.'
Once you execute the code with Connection2, SQL Server will throw error 1205 and Connection2 will be the deadlock victim. This is because Connection1 obtains and holds the lock on the resource until its transaction completes, stopping others from accessing the resource and avoiding lost updates. Note that by setting DEADLOCK_PRIORITY you can influence which connection becomes the deadlock victim.
Since the lock is held until the transaction completes, nonrepeatable reads are avoided too. See the code below.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT Notes
FROM dbo.Employees
WHERE EmployeeID = 10
It reads a record from the Employees table. The set Isolation Level guarantees the same result for the query anywhere in the transaction because it holds the lock without releasing, avoiding modification from others. It guarantees consistency of the information and no Nonrepeatable reads.
Now let's take another simple example. In this case, we add a new table called Allowances and a new column to the Employees table called IsBirthdayAllowanceGiven. The code for the changes is below:
USE Northwind
GO
-- table holds allowances
CREATE TABLE Allowances (EmployeeID int, MonthAndYear datetime, Allowance money)
GO
-- additional column that tells whether the birthday allowance is given or not
ALTER TABLE dbo.Employees
ADD IsBirthdayAllowanceGiven bit DEFAULT(0) NOT NULL
GO
Assume that the company pays an additional allowance to employees whose birth date falls in the current month. The stored procedure below inserts allowances for those employees and updates their records. Note that WAITFOR DELAY has been added to hold the transaction for a few seconds in order to demonstrate the problem, and that no Isolation Level has been set, so the default applies.
IF OBJECT_ID(N'dbo.AddBirthdayAllowance', N'P') IS NOT NULL
BEGIN
DROP PROC dbo.AddBirthdayAllowance
END
GO
CREATE PROC dbo.AddBirthdayAllowance
AS
BEGIN
BEGIN TRAN
-- inserts records to allowances table
INSERT INTO Allowances
(EmployeeID, MonthAndYear, Allowance)
SELECT EmployeeID, getdate(), 100.00
FROM dbo.Employees
WHERE IsBirthdayAllowanceGiven = 0
AND MONTH(BirthDate) = MONTH(getdate())
-- hold the transaction for 5 seconds
-- Consider this is as some other process that takes 5 seconds
WAITFOR DELAY '00:00:05'
-- update IsBirthdayAllowanceGiven column in Employees table
UPDATE dbo.Employees
SET IsBirthdayAllowanceGiven = 1
WHERE IsBirthdayAllowanceGiven = 0
AND MONTH(BirthDate) = MONTH(getdate())
COMMIT TRAN
END
Before running any queries, make sure at least one employee's birth date falls in the current month. Now open a new connection (let's name it Connection1) and run the stored procedure. In my Northwind database, I have one record that satisfies the criteria: EmployeeID 6, Michael Suyama.
USE Northwind
GO
EXEC dbo.AddBirthdayAllowance
Immediately, open Connection2 and insert a new employee whose birth date falls in the current month.
USE Northwind
GO
INSERT INTO dbo.Employees
(LastName, FirstName, Title, TitleOfCourtesy, BirthDate, HireDate)
VALUES
('Creg', 'Alan', 'Sales Representative', 'Ms.', '07/13/1980', '07/20/2007')
Go back to Connection1. Once the transaction has completed, query the Allowances table. You will see one record, generated for Michael. Then open the Employees table and see how many records have been updated: two have been updated, not only Michael's but also Alan's. Note that no record has been inserted into the Allowances table for Alan. In this case, the new record is considered a phantom record, and reading it is called a phantom read. This cannot be avoided with the default Isolation Level, Read Committed. Change the stored procedure and set the Isolation Level to Repeatable Read.
IF OBJECT_ID(N'dbo.AddBirthdayAllowance', N'P') IS NOT NULL
BEGIN
DROP PROC dbo.AddBirthdayAllowance
END
GO
CREATE PROC dbo.AddBirthdayAllowance
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
-- inserts records to allowances table
INSERT INTO Allowances
(EmployeeID, MonthAndYear, Allowance)
SELECT EmployeeID, getdate(), 100.00
FROM dbo.Employees
WHERE IsBirthdayAllowanceGiven = 0
AND MONTH(BirthDate) = MONTH(getdate())
Now bring the Employees table back to its original state.
UPDATE dbo.Employees
SET IsBirthdayAllowanceGiven = 0
DELETE dbo.Employees
WHERE FirstName = 'Alan'
DELETE dbo.Allowances
Open two connections again and try the same. Check the result: the phantom read problem still exists. In order to avoid this problem, you need to use the highest Isolation Level, which is Serializable.
Serializable Isolation Level
This is the highest Isolation Level and it avoids all the concurrency related problems. Its behavior is just like Repeatable Read, with one additional feature: it obtains key-range locks based on the filters that have been used, locking not only the current records that satisfy the filter but also the range into which new records matching the filter would fall. Change the stored procedure used in the sample above and set the Isolation Level to Serializable.
IF OBJECT_ID(N'dbo.AddBirthdayAllowance', N'P') IS NOT NULL
BEGIN
DROP PROC dbo.AddBirthdayAllowance
END
GO
CREATE PROC dbo.AddBirthdayAllowance
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
-- inserts records to allowances table
INSERT INTO Allowances
(EmployeeID, MonthAndYear, Allowance)
SELECT EmployeeID, getdate(), 100.00
FROM dbo.Employees
WHERE IsBirthdayAllowanceGiven = 0
AND MONTH(BirthDate) = MONTH(getdate())
Run the clean up code again to bring the Employees table to the original state.
Now test the stored procedure and the INSERT statement with two connections. You will notice that the INSERT operation is blocked until Connection1 completes its transaction, avoiding phantom reads.
Run the clean up code again, then drop the new Allowances table and the added IsBirthdayAllowanceGiven column from the Employees table.
Whenever we set an Isolation Level for a transaction, SQL Server makes sure that the transaction is not disturbed by other transactions. This is called concurrency control. All the Isolation Levels we have discussed so far fall under Pessimistic control: SQL Server locks the resource until the transaction has performed the action it needs and then releases it for others. The other form of concurrency control is Optimistic control: SQL Server does not hold locks; instead, once data has been read, it checks whether the data has changed before it is read or modified again. The two Isolation Levels newly introduced with SQL Server 2005 are Snapshot and Read Committed Snapshot. These two Isolation Levels provide Optimistic control and they use row versioning.
Snapshot Isolation Level
The Snapshot Isolation Level works with row versioning. Whenever a transaction modifies a record, SQL Server first stores a consistent version of the record in tempdb. If another transaction running under the Snapshot Isolation Level requires the same record, it can be taken from the version store. This Isolation Level prevents all concurrency related problems, just like the Serializable Isolation Level; in addition, it allows different transactions to update the same resource concurrently.
Since there is a performance impact with the Snapshot Isolation Level, it is turned off by default. The impact is explained below with the sample. You can enable it by altering the database.
ALTER DATABASE Northwind SET ALLOW_SNAPSHOT_ISOLATION ON
Let's look at a simple sample. Make sure you have enabled the Snapshot Isolation Level in the database before running the query below. Open a new connection (Connection1) and execute the query below:
USE Northwind
BEGIN TRAN
-- update the HireDate from 5/1/1992 to 5/2/1992
UPDATE dbo.Employees
SET HireDate = '5/2/1992'
WHERE EmployeeID = 1
Now open the second connection (Connection2) and try to retrieve the same record.
SELECT *
FROM dbo.Employees
WHERE EmployeeID = 1
As you have seen with the examples discussed under the other levels, the record cannot be retrieved. Since we have enabled the Snapshot Isolation Level in the database, SQL Server stores a version of the record. Use the dynamic management view below to retrieve the versions held in the version store.
SELECT * FROM sys.dm_tran_version_store;
You will see one record in the store. Now set the Isolation Level of the Connection2 as Snapshot and try to retrieve the record.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT *
FROM dbo.Employees
WHERE EmployeeID = 1
This returns the record from the version store, which is the last consistent version of the record. Note that the HireDate of the employee is 05/01/1992, not 05/02/1992. Now go back to Connection1 and commit the transaction.
COMMIT TRAN
Again open Connection2 and execute the query. Note that even though Connection1 has committed the change, Connection2 still gets the older record. This is because that version was the consistent record in the version store when Connection2 started its transaction, and the same version is read throughout the transaction. SQL Server keeps this version of the record until there are no more references to it. If another transaction starts changing the same record, another version is stored, and so on; the result is a longer linked list in the version store. Maintaining a longer linked list and traversing through it will impact performance. Committing the transaction in Connection2 removes the reference to the first version, and that version is then removed from the store by a separate clean-up process.
There is another great feature of the Snapshot Isolation Level: conflict detection. Suppose one transaction reads a record from the version store and later tries to update it, but another transaction updates the same record first. SQL Server detects this conflict and aborts the transaction that performed the earlier read when it attempts its update.
Open a connection (Connection1) and run the query below. The update statement causes the current consistent version to be added to the version store.
USE Northwind
BEGIN TRAN
-- update the HireDate from 5/1/1992 to 5/2/1992
UPDATE dbo.Employees
SET HireDate = '5/2/1992'
WHERE EmployeeID = 1
Open the second connection (Connection2) and read the same record. Note the Isolation Level.
USE Northwind
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT *
FROM dbo.Employees
WHERE EmployeeID = 1
Go back to Connection1 and commit the transaction.
COMMIT TRAN
Go back to Connection2 and try to update the record. Note that the current transaction is still running. When you execute the UPDATE statement, SQL Server detects the modification made by Connection1 between the read and the write, and throws an error:
UPDATE dbo.Employees
SET HireDate = '5/3/1992'
WHERE EmployeeID = 1
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.Employees' directly or indirectly in database 'Northwind' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
Once the conflict is detected, SQL Server terminates the transaction in Connection2. Though this Isolation Level has some great advantages, it is not recommended for a database with many updates. It is suitable for databases that are mostly read, with occasional updates.
Read Committed Snapshot Isolation Level
This is the new implementation of the Read Committed Isolation Level. It is set not at the session/connection level but at the database level. The only difference between Read Committed and Read Committed Snapshot is that Read Committed Snapshot is optimistic whereas Read Committed is pessimistic. Read Committed Snapshot differs from Snapshot in two ways: unlike Snapshot, it always returns the latest committed version, and it does not perform conflict detection.
Let’s test this out. First, enable the Isolation Level.
ALTER DATABASE Northwind SET READ_COMMITTED_SNAPSHOT ON
Now open a new connection (Connection1) and run the below query.
USE Northwind
BEGIN TRAN
-- update the HireDate from 5/1/1992 to 5/2/1992
UPDATE dbo.Employees
SET HireDate = '5/2/1992'
WHERE EmployeeID = 1
This places the last consistent version in the version store. Now open the second connection (Connection2) and try to retrieve the record.
USE Northwind
GO
BEGIN TRAN
SELECT *
FROM dbo.Employees
WHERE EmployeeID = 1
You get a record from the version store. The value of HireDate will be the last committed value, which is 05/01/1992. Go back to Connection1 and commit the transaction.
COMMIT TRAN
In Connection2, execute the SELECT statement again. Unlike Snapshot, the latest committed version is returned, with HireDate as 05/02/1992. Commit the Connection2 transaction too.
Since old versions do not have to be kept for the duration of the transaction with this level, the performance impact is smaller than with Snapshot, but all the concurrency related problems except dirty reads can still occur.

Finally, let's summarize. The table below lists the important points of each level.

Isolation Level            Dirty Reads   Lost Updates   Nonrepeatable Reads   Phantom Reads   Concurrency Control
Read Uncommitted           Yes           Yes            Yes                   Yes             Pessimistic
Read Committed             No            Yes            Yes                   Yes             Pessimistic
Repeatable Read            No            No             No                    Yes             Pessimistic
Serializable               No            No             No                    No              Pessimistic
Snapshot                   No            No             No                    No              Optimistic (row versioning)
Read Committed Snapshot    No            Yes            Yes                   Yes             Optimistic (row versioning)

Session/State Management

State Management
No web application framework, no matter how advanced, can change the fact that HTTP is a stateless protocol. Statelessness keeps the protocol simple, and the server is free to forget each user after every request; unfortunately, our applications usually cannot afford to forget their users, and there is a lot at stake if we do.
The ASP.NET Framework provides features with which we can maintain the state of our users. There are two broad options: Client Side State Management and Server Side State Management.
There are several techniques we can apply to manage state on either the client or the server side.
Client Side State Management
• Cookies
• View State
• Hidden Fields
• Control State
• Query String
Server Side State Management
• Session State
• Application State

• Profile Properties
In this article we will be discussing Session based State Management and how to do InProc, State and SQL Server State Management.
Session
Sessions are stored server side and are unique to every user. Every user accessing the application is given a unique Session Id when he or she first accesses the application. The Session Id is sent with every subsequent request the user posts to the server, so the server can recognize who the user is.
In its simplest form, a session value can be set as:
Session["mySession"] = "some data";
and retrieved as:
string data = Session["mySession"].ToString();
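Because Session stores values as object, it is safer to check for null before casting when reading a value back; a small sketch (the key name is arbitrary):

// Store a value in the session.
Session["CartCount"] = 3;

// Read it back defensively.
int cartCount = 0;
if (Session["CartCount"] != null)
{
    cartCount = (int)Session["CartCount"];
}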
Modes of Sessions
In-Process also known as In-Proc
Out-of-Process also known as State Server,
SQL Server
In-Process

This model is the fastest and most common, and it is the default behavior of Sessions. Session information is stored in the memory of the worker process hosted by IIS (Internet Information Services). Note that whatever model you use, there is no change in how you assign or retrieve information from Session. However, there are a few details you need to watch for when working with the In-Process session model:
1. If you change Web.config or Global.asax, the application restarts itself and, as a result, session information is lost.
2. Your session information is also lost if you change the code files in your App_Code folder.
3. If IIS restarts, the result is the same.
4. The handy Session_Start and Session_End events in the Global.asax file are available (Session_End is effective only in In-Process mode).
5. The timeout attribute in the Web.config file controls how long a session stays alive.
To configure your application for In-Proc session management, the most common and basic setup of your web.config file is the following:
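A minimal sketch of such a web.config entry (the timeout value is described below):

<configuration>
  <system.web>
    <sessionState mode="InProc" timeout="20" />
  </system.web>
</configuration>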
The timeout attribute takes a numeric value, in minutes. In this case the user's session will expire after 20 minutes.
Out-Of-Process
Out-Of-Process sessions are stored in a process that runs as a Windows service called ASP.NET State Service. This service is not running by default; two of the ways to start it are:
1. Click Start > Run. Type cmd to open a Command Prompt. Type net start aspnet_state. If the service is not already running, this command will start it.
2. Right-click My Computer and select Manage. Expand Services and Applications and click Services. In the right panel look for ASP.NET State Service, right-click it and select Properties from the context menu. Set the Startup Type to Automatic and start the service by clicking the Start button.
Each model has its benefits and drawbacks. You as a designer of the application have to take this decision of what model to adopt. You have to decide between speed, reliability and expansion.
There are other things that you need to know about this model.
1. Using this model, it is not necessary that the service is running on the same server on which your application is running/deployed. Session information can be stored on any server/physical machine on which this service is running and which is accessible to your application server.
2. timeout attribute applies to this model as well.
3. Session_End Event in Global.asax file is not fired using this model.
4. Objects must be serializable before they can be stored in Session.
To configure your application for out-of-proc session management, the most common and basic setup of your web.config file is the following:
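A minimal sketch of such a web.config entry (the address and port are described below):

<configuration>
  <system.web>
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=127.0.0.1:42424"
                  timeout="20" />
  </system.web>
</configuration>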
The stateConnectionString attribute holds the server where you want to maintain your session information. 127.0.0.1 is localhost, the machine on which your application is running, and 42424 is the port on which the State Service listens.
SQL Server
Session information can also be stored in SQL Server. To configure this, the most reliable option for maintaining session information, perform the following step.
Open the Visual Studio Command Prompt and run the command aspnet_regsql -S [localhost] -U [user id] -P [password] -ssadd -sstype p
This command will configure SQL Server based session state management on your machine. The configuration is not yet complete; there are other things that you need to know about this model.
1. SQL Server based session management is the slowest of all, but the most reliable.
2. SQL Server based session management is not affected by web farming or web gardening issues.
3. You can explore other command line arguments by running aspnet_regsql -?
4. The table created after running the above command is ASPStateTempSessions.
To configure your application for SQL Server session management, the most common and basic setup of your web.config file looks something like the following (the connection string values are illustrative):
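<configuration>
  <system.web>
    <!-- Illustrative SQL Server session configuration; replace the data source and credentials with your own -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="data source=127.0.0.1;user id=[user id];password=[password]"
                  timeout="20" />
  </system.web>
</configuration>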
The sqlConnectionString attribute holds the connection string for the location of your database, together with the user id and password used to connect to it. There are multiple other options that can be used with the aspnet_regsql command.

Understanding Session State Modes
Storage location
InProc - session kept as live objects in web server (aspnet_wp.exe)
StateServer - session serialized and stored in memory in a separate process aspnet_state.exe). State Server can run on another machine
SQLServer - session serialized and stored in SQL server
Performance
InProc - Fastest, but the more session data, the more memory is consumed on the web server, and that can affect performance.
StateServer - When storing data of basic types (e.g. string, integer, etc), in one test environment it's 15% slower than InProc. However, the cost of serialization/deserialization can affect performance if you're storing lots of objects. You have to do performance testing for your own scenario.
SQLServer - When storing data of basic types (e.g. string, integer, etc), in one test environment it's 25% slower than InProc. Same warning about serialization as in StateServer.
Performance tips for Out-of-Proc (OOP) modes
If you're using OOP modes (State Server or SQL Server), one of your major costs is the serialization/deserialization of objects in your session state. ASP.NET performs the serialization/deserialization of certain "basic" types using an optimized internal method. "Basic" types include the numeric types of all sizes (e.g. Int, Byte, Decimal), as well as String, DateTime, TimeSpan, Guid, IntPtr and UIntPtr.
If you have a session variable (e.g. an ArrayList object) that is not one of the "basic" types, ASP.NET will serialize/deserialize it using the BinaryFormatter, which is relatively slower.
So for performance's sake it is better to store all session state data using one of the "basic" types listed above. For example, if you want to store two things, Name and Address, in session state, you can either (a) store them using two String session variables, or (b) create a class with two String members and store that class object in a session variable. Performance wise, you should go with option (a), as sketched below.
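To make the comparison concrete, here is a small sketch (the class, method and key names are illustrative) of the two options; with an out-of-proc mode, option (a) goes through the optimized serialization for strings, while option (b) falls back to the slower BinaryFormatter path:

using System;
using System.Web.SessionState;

// Option (b): a custom class stored as a single session variable.
// It must be marked [Serializable] and is handled by the BinaryFormatter.
[Serializable]
public class ContactInfo
{
    public string Name;
    public string Address;
}

public static class SessionStorageExamples   // hypothetical helper class
{
    public static void StoreAsBasicTypes(HttpSessionState session)
    {
        // Option (a): two "basic" String session variables,
        // serialized with ASP.NET's optimized internal method.
        session["Name"] = "Pradeep";
        session["Address"] = "Bangalore";
    }

    public static void StoreAsObject(HttpSessionState session)
    {
        // Option (b): one object-typed session variable.
        session["Contact"] = new ContactInfo { Name = "Pradeep", Address = "Bangalore" };
    }
}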
Robustness
InProc
- Session state will be lost if the worker process (aspnet_wp.exe) recycles, or if the AppDomain restarts. This is because session state is stored in the memory space of an AppDomain. The restart can be caused by the modification of certain config files such as web.config and machine.config, or by any change in the \bin directory (such as a new DLL after you've recompiled the application using VS). For details, see KB324772. In v1, there is also a bug that will cause the worker process to restart. It's fixed in SP2 and in v1.1.
If you're using IIS 6.0, you may want to go to IIS Manager, go to Application Pools/DefaultAppPool, and see if any of the parameters on the Recycling and Performance tabs are causing the IIS worker process (w3wp.exe) to shut down.
StateServer - Solves the session state loss problem of InProc mode and allows a web farm to store session state on a central server. The trade-off is a single point of failure at the State Server.
SQLServer - Similar to StateServer. Moreover, session state data can survive a SQL server restart, and you can also take advantage of SQL server failover cluster.
Caveats
InProc
- It won't work in web garden mode, because in that mode multiple aspnet_wp.exe processes will be running on the same machine. Switch to StateServer or SQLServer when using a web garden. Also, the Session_End event is supported only in InProc mode.
StateServer
- In a web farm, make sure you have the same <machineKey> in all your web servers (see the illustrative snippet after this list).
- Also, make sure your objects are serializable.
- For session state to be maintained across different web servers in the web farm, the Application Path of the website (For example \LM\W3SVC\2) in the IIS Metabase should be identical (case sensitive) in all the web servers in the web farm.
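As a purely illustrative sketch (the bracketed values are placeholders, not real keys), the shared <machineKey> element in each server's web.config might look like this:

<configuration>
  <system.web>
    <!-- The same keys must be configured on every server in the web farm -->
    <machineKey validationKey="[shared validation key]"
                decryptionKey="[shared decryption key]"
                validation="SHA1" />
  </system.web>
</configuration>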
SQLServer
- In v1, there is a bug such that if you specify integrated security in the connection string (e.g. "trusted_connection=true", or "integrated security=sspi"), it won't work if you also turn on impersonation in ASP.NET.
- Also, make sure your objects are serializable. Otherwise, your request will hang. The SQLServer mode hanging problem was fixed in v1.1. The QFE fix for KB 324479 also contains the fix for this problem. The problem will be fixed in v1 SP3 too.
- For session state to be maintained across different web servers in the web farm, the Application Path of the website (For example \LM\W3SVC\2) in the IIS Metabase should be identical (case sensitive) in all the web servers in the web farm. See KB 325056 for details

Wednesday, August 5, 2009

Function Point Analysis

Function Point Analysis is a structured technique of problem solving. It is a method to break systems into smaller components, so they can be better understood and analyzed.
Function points are a unit measure for software much like an hour is to measuring time, miles are to measuring distance or Celsius is to measuring temperature. Function Points are an ordinal measure much like other measures such as kilometers, Fahrenheit, hours, so on and so forth.
In the world of Function Point Analysis, systems are divided into five large classes and general system characteristics. The first three classes or components are External Inputs, External Outputs and External Inquiries; each of these components transacts against files, therefore they are called transactions. The next two, Internal Logical Files and External Interface Files, are where data is stored and combined to form logical information. The general system characteristics assess the general functionality of the system.
Brief History
Function Point Analysis was developed first by Allan J. Albrecht in the mid 1970s. It was an attempt to overcome difficulties associated with lines of code as a measure of software size, and to assist in developing a mechanism to predict effort associated with software development. The method was first published in 1979, then later in 1983. In 1984 Albrecht refined the method and since 1986, when the International Function Point User Group (IFPUG) was set up, several versions of the Function Point Counting Practices Manual have been published by IFPUG. The current version of the IFPUG Manual is 4.1. A full function point training manual can be downloaded from this website.
Objectives of Function Point Analysis
Frequently the term end user or user is used without specifying what is meant. In this case, the user is a sophisticated user: someone who understands the system from a functional perspective, more than likely someone who provides requirements or performs acceptance testing.
Since Function Points measure systems from a functional perspective, they are independent of technology. Regardless of language, development method, or hardware platform used, the number of function points for a system will remain constant. The only variable is the amount of effort needed to deliver a given set of function points; therefore, Function Point Analysis can be used to determine whether a tool, an environment or a language is more productive compared with others within an organization or among organizations. This is a critical point and one of the greatest values of Function Point Analysis.
Function Point Analysis can provide a mechanism to track and monitor scope creep. Function Point Counts at the end of requirements, analysis, design, code, testing and implementation can be compared. The function point count at the end of requirements and/or designs can be compared to function points actually delivered. If the project has grown, there has been scope creep. The amount of growth is an indication of how well requirements were gathered by and/or communicated to the project team. If the amount of growth of projects declines over time it is a natural assumption that communication with the user has improved.
Characteristic of Quality Function Point Analysis
Function Point Analysis should be performed by trained and experienced personnel. If Function Point Analysis is conducted by untrained personnel, it is reasonable to assume the analysis will be done incorrectly. The personnel counting function points should utilize the most current version of the Function Point Counting Practices Manual.
Current application documentation should be utilized to complete a function point count. For example, screen formats, report layouts, listing of interfaces with other systems and between systems, logical and/or preliminary physical data models will all assist in Function Points Analysis.
The task of counting function points should be included as part of the overall project plan. That is, counting function points should be scheduled and planned. The first function point count should be developed to provide sizing used for estimating.
The Five Major Components
Since it is common for computer systems to interact with other computer systems, a boundary must be drawn around each system to be measured prior to classifying components. This boundary must be drawn according to the user’s point of view. In short, the boundary indicates the border between the project or application being measured and the external applications or user domain. Once the border has been established, components can be classified, ranked and tallied.

External Inputs (EI) - an elementary process in which data crosses the boundary from outside to inside. This data may come from a data input screen or another application. The data may be used to maintain one or more internal logical files. The data can be either control information or business information. If the data is control information, it does not have to update an internal logical file. The graphic represents a simple EI that updates 2 ILF's (FTR's).

External Outputs (EO) - an elementary process in which derived data passes across the boundary from inside to outside. Additionally, an EO may update an ILF. The data creates reports or output files sent to other applications. These reports and files are created from one or more internal logical files and external interface files. The graphic represents an EO with 2 FTR's, where there is derived information (green) that has been derived from the ILF's.

External Inquiry (EQ) - an elementary process with both input and output components that result in data retrieval from one or more internal logical files and external interface files. The input process does not update any Internal Logical Files, and the output side does not contain derived data. The graphic below represents an EQ with two ILF's and no derived data.

Internal Logical Files (ILF's) - a user identifiable group of logically related data that resides entirely within the application's boundary and is maintained through external inputs.
External Interface Files (EIF’s) - a user identifiable group of logically related data that is used for reference purposes only. The data resides entirely outside the application and is maintained by another application. The external interface file is an internal logical file for another application.

After the components have been classified as one of the five major components (EI's, EO's, EQ's, ILF's or EIF's), a ranking of low, average or high is assigned. For transactions (EI's, EO's, EQ's) the ranking is based upon the number of files updated or referenced (FTR's) and the number of data element types (DET's). For both ILF's and EIF's the ranking is based upon record element types (RET's) and data element types (DET's). A record element type is a user recognizable subgroup of data elements within an ILF or EIF. A data element type is a unique user recognizable, non-recursive field.
Each of the following tables assists in the ranking process (the numerical rating is in parentheses). For example, an EI that references or updates 2 File Types Referenced (FTR's) and has 7 data elements would be assigned a ranking of average and an associated rating of 4. FTR's are the combined number of Internal Logical Files (ILF's) referenced or updated and External Interface Files (EIF's) referenced.
EI Table
FTR's              1 to 4 DET's       5 to 15 DET's      16 or more DET's
0 - 1              Low (3)            Low (3)            Average (4)
2                  Low (3)            Average (4)        High (6)
3 or more          Average (4)        High (6)           High (6)
Shared EO and EQ Table

FTR's              1 to 5 DET's       6 to 19 DET's      20 or more DET's
0 - 1              Low                Low                Average
2 - 3              Low                Average            High
4 or more          Average            High               High

Values for transactions

Rating             EI                 EO                 EQ
Low                3                  4                  3
Average            4                  5                  4
High               6                  7                  6
Like all components, EQ's are rated and scored. Basically, an EQ is rated (Low, Average or High) like an EO, but assigned a value like an EI. The rating is based upon the total number of unique data elements (DET's) and the file types referenced (FTR's), in both cases combining the unique input and output sides. If the same FTR is used on both the input and output side, it is counted only one time. If the same DET is used on both the input and output side, it is counted only one time.
For both ILF's and EIF's the number of record element types and the number of data element types are used to determine a ranking of low, average or high. A Record Element Type is a user recognizable subgroup of data elements within an ILF or EIF. A Data Element Type (DET) is a unique user recognizable, non-recursive field on an ILF or EIF.
ILF and EIF Table

RET's              1 to 19 DET's      20 to 50 DET's     51 or more DET's
1                  Low                Low                Average
2 - 5              Low                Average            High
6 or more          Average            High               High

Values for files

Rating             ILF                EIF
Low                7                  5
Average            10                 7
High               15                 10
The counts for each level of complexity for each type of component can be entered into a table such as the following one. Each count is multiplied by the numerical rating shown to determine the rated value. The rated values on each row are summed across the table, giving a total value for each type of component. These totals are then summed down the table to arrive at the Total Number of Unadjusted Function Points.
Type of Component                  Low          Average      High         Total
External Inputs (EI)               ___ x 3      ___ x 4      ___ x 6      ___
External Outputs (EO)              ___ x 4      ___ x 5      ___ x 7      ___
External Inquiries (EQ)            ___ x 3      ___ x 4      ___ x 6      ___
Internal Logical Files (ILF)       ___ x 7      ___ x 10     ___ x 15     ___
External Interface Files (EIF)     ___ x 5      ___ x 7      ___ x 10     ___
Total Number of Unadjusted Function Points                                ___
The value adjustment factor (VAF) is based on 14 general system characteristics (GSC's) that rate the general functionality of the application being counted. Each characteristic has associated descriptions that help determine the degrees of influence of the characteristics. The degrees of influence range on a scale of zero to five, from no influence to strong influence. The IFPUG Counting Practices Manual provides detailed evaluation criteria for each of the GSC's; the 14 characteristics are Data Communications, Distributed Data Processing, Performance, Heavily Used Configuration, Transaction Rate, On-line Data Entry, End-User Efficiency, On-line Update, Complex Processing, Reusability, Installation Ease, Operational Ease, Multiple Sites, and Facilitate Change.
Once all the 14 GSC's have been answered, they are tabulated using the IFPUG Value Adjustment Equation:

VAF = 0.65 + [ (sum of Ci for i = 1 to 14) / 100 ]

where Ci is the degree of influence of the i-th General System Characteristic and the sum runs over all 14 GSC's.
The final Function Point Count is obtained by multiplying the VAF times the Unadjusted Function Point (UAF).
FP = UAF * VAF
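As a quick, purely hypothetical illustration of the formula, suppose an application has 300 unadjusted function points and its 14 GSC ratings sum to 30; the calculation works out as follows:

using System;

class FunctionPointExample
{
    static void Main()
    {
        // Hypothetical inputs: 300 unadjusted function points and
        // 14 GSC degrees of influence that happen to sum to 30
        int unadjustedFunctionPoints = 300;
        int sumOfGscRatings = 30;

        // VAF = 0.65 + (sum of the 14 GSC ratings / 100)
        double vaf = 0.65 + (sumOfGscRatings / 100.0);

        // FP = UAF * VAF
        double adjustedFunctionPoints = unadjustedFunctionPoints * vaf;

        Console.WriteLine("VAF = {0:0.##}", vaf);                    // 0.95
        Console.WriteLine("FP  = {0:0.##}", adjustedFunctionPoints); // 285
    }
}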
Summary of benefits of Function Point Analysis
Function Points can be used to size software applications accurately. Sizing is an important component in determining productivity (outputs/inputs).
They can be counted by different people, at different times, to obtain the same measure within a reasonable margin of error.
Function Points are easily understood by the non technical user. This helps communicate sizing information to a user or customer.
Function Points can be used to determine whether a tool, a language, an environment, is more productive when compared with others.
For a more complete list of uses and benefits of FP please see the online article on Using Function Points.
Conclusions
Accurately predicting the size of software has plagued the software industry for over 45 years. Function Points are becoming widely accepted as the standard metric for measuring software size. Now that Function Points have made adequate sizing possible, it can be anticipated that the overall rate of progress in software productivity and software quality will improve. Understanding software size is the key to understanding both productivity and quality. Without a reliable sizing metric, relative changes in productivity (Function Points per Work Month) or relative changes in quality (Defects per Function Point) cannot be calculated. If relative changes in productivity and quality can be calculated and plotted over time, then focus can be put upon an organization's strengths and weaknesses. Most important, any attempt to correct weaknesses can be measured for effectiveness.