
Application Architecture for .NET: Designing Applications and Services

This guide is for you if you are an architect or developer lead who needs to:

- Determine how an application will be split into components,
- Choose which technologies will be used in a transactional line of business application or service,
- Design which management and security policies should be applied, and
- Decide how the application will be deployed.

This guide applies to transactional or OLTP applications that follow a layered design and can be distributed across many physical tiers by using the following technologies: ASP.NET, Web Services, Enterprise Services (COM+), Remoting, ADO.NET, and SQL Server. Some design principles presented in this guide may also be useful in other, similar scenarios.

Designing distributed applications is no simple task. Many decisions need to be made at the architecture, design, and implementation levels. These decisions will have an impact on the "abilities" of the application (security, scalability, availability, and maintainability, to name a few) and on the architecture, design, and implementation of the target infrastructure. This guide will help you understand the choices that arise when designing the layers of a distributed application, and it presents these choices as a set of component layers that you can use to model your application. Figure 1 shows the logical component layers that this document uses to structure its guidance. These layers are explained, for the most part, in Chapter 2.


1. User interface (UI) components. Most solutions need to provide a way for users to interact with the application. In the retail application example, a Web site lets customers view products and submit orders, and an application based on the Microsoft Windows® operating system lets sales representatives enter order data for customers who have telephoned the company. User interfaces are implemented using Windows Forms, Microsoft ASP.NET pages, controls, or any other technology you use to render and format data for users and to acquire and validate data coming in from them.

2. User process components. In many cases, a user interaction with the system follows a predictable process. For example, in the retail application you could implement a procedure for viewing product data that has the user select a category from a list of available product categories and then select an individual product in the chosen category to view its details. Similarly, when the user makes a purchase, the interaction follows a predictable process of gathering data from the user, in which the user first supplies details of the products to be purchased, then provides payment details, and then enters delivery details. To help synchronize and orchestrate these user interactions, it can be useful to drive the process using separate user process components. This way the process flow and state management logic is not hard-coded in the user interface elements themselves, and the same basic user interaction "engine" can be reused by multiple user interfaces.

3. Business workflows. After the required data is collected by a user process, the data can be used to perform a business process. For example, after the product, payment, and delivery details are submitted to the retail application, the process of taking payment and arranging delivery can begin. Many business processes involve multiple steps that must be performed in the correct order and orchestrated. For example, the retail system would need to calculate the total value of the order, validate the credit card details, process the credit card payment, and arrange delivery of the goods. This process could take an indeterminate amount of time to complete, so the required tasks and the data required to perform them would have to be managed. Business workflows define and coordinate long-running, multi-step business processes, and they can be implemented using business process management tools such as BizTalk Server Orchestration.

4. Business components. Regardless of whether a business process consists of a single step or an orchestrated workflow, your application will probably require components that implement business rules and perform business tasks. For example, in the retail application, you would need to implement the functionality that calculates the total price of the goods ordered and adds the appropriate delivery charge. Business components implement the business logic of the application.

5. Service agents. When a business component needs to use functionality provided in an external service, you may need to provide some code to manage the semantics of communicating with that particular service. For example, the business components of the retail application described earlier could use a service agent to manage communication with the credit card authorization service, and use a second service agent to handle conversations with the courier service. Service agents isolate the idiosyncrasies of calling diverse services from your application, and can provide additional services, such as basic mapping between the format of the data exposed by the service and the format your application requires.

6. Service interfaces. To expose business logic as a service, you must create service interfaces that support the communication contracts (message-based communication, formats, protocols, security, exceptions, and so on) its different consumers require. For example, the credit card authorization service must expose a service interface that describes the functionality offered by the service and the required communication semantics for calling it. Service interfaces are sometimes referred to as business facades (a hedged sketch of one appears after this list).

7. Data access logic components. Most applications and services will need to access a data store at some point during a business process. For example, the retail application needs to retrieve product data from a database to display product details to the user, and it needs to insert order details into the database when a user places an order. It makes sense to abstract the logic necessary to access data into a separate layer of data access logic components. Doing so centralizes data access functionality and makes it easier to configure and maintain (a sketch combining a data access component with a business entity appears after this list).

8. Business entity components. Most applications require data to be passed between components. For example, in the retail application a list of products must be passed from the data access logic components to the user interface components so that the product list can be displayed to the users. The data is used to represent real-world business entities, such as products or orders. The business entities that are used internally in the application are usually data structures, such as DataSets, DataReaders, or Extensible Markup Language (XML) streams, but they can also be implemented using custom object-oriented classes that represent the real-world entities your application has to work with, such as a product or an order.

9. Components for security, operational management, and communication. Your application will probably also use components to perform exception management, to authorize users to perform certain tasks, and to communicate with other services and applications. These components are discussed in detail in Chapter 3.
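To make the service interface idea more concrete, here is a hedged sketch for the credit card authorization example mentioned in item 6, exposed as an ASP.NET Web Service. The class name, namespace, and method signature are invented for illustration, and the authorization logic is omitted:

using System.Web.Services;

// Hypothetical service interface (business facade) exposed as an ASMX Web Service.
[WebService(Namespace = "http://example.org/CreditAuthorization")]
public class CreditAuthorizationService : WebService
{
    [WebMethod]
    public bool AuthorizePayment(string cardNumber, decimal amount)
    {
        // A real service would delegate to business components here;
        // this placeholder simply approves small amounts.
        return amount <= 1000m;
    }
}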
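Similarly, for items 7 and 8, a minimal sketch of a data access logic component returning a business entity might look like the following. Table, column, and class names are assumptions, not part of the guide, and the entity uses public fields only to keep the sketch short:

using System.Data;
using System.Data.SqlClient;

// Hypothetical business entity for the retail example.
public class Product
{
    public int ProductId;
    public string Name;
    public decimal UnitPrice;
}

// Hypothetical data access logic component that centralizes product queries.
public class ProductsData
{
    private readonly string connectionString;

    public ProductsData(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Reads a single product row and maps it onto the Product entity.
    public Product GetProduct(int productId)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT ProductId, Name, UnitPrice FROM Products WHERE ProductId = @Id",
            connection))
        {
            command.Parameters.Add("@Id", SqlDbType.Int).Value = productId;
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    return null;

                Product product = new Product();
                product.ProductId = reader.GetInt32(0);
                product.Name = reader.GetString(1);
                product.UnitPrice = reader.GetDecimal(2);
                return product;
            }
        }
    }
}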


ASP.NET is a powerful platform for building Web applications that provides a tremendous amount of flexibility for building just about any kind of Web application. Most people are familiar only with the high level frameworks like WebForms and WebServices, which sit at the very top of the ASP.NET hierarchy. In this article I’ll describe the lower level aspects of ASP.NET and explain how requests move from the Web Server to the ASP.NET runtime and then through the ASP.NET Http Pipeline.

To me, understanding the innards of a platform always provides a certain satisfaction and level of comfort, as well as insight that helps to write better applications. Knowing what tools are available and how they fit together as part of the whole complex framework makes it easier to find the best solution to a problem and, more importantly, helps in troubleshooting and debugging problems when they occur.

The goal of this article is to look at ASP.NET from the system level and help you understand how requests flow into the ASP.NET processing pipeline. As such, we’ll look at the core engine and how Web requests end up there. Much of this information is not something that you need to know in your daily work, but it’s good to understand how the ASP.NET architecture routes requests into your application code, which usually sits at a much higher level.

Most people using ASP.NET are familiar with WebForms and WebServices. These high level implementations are abstractions that make it easy to build Web based application logic and ASP.NET is the driving engine that provides the underlying interface to the Web Server and routing mechanics to provide the base for these high level front end services typically used for your applications. WebForms and WebServices are merely two very sophisticated implementations of HTTP Handlers built on top of the core ASP.NET framework.

However, ASP.NET provides much more flexibility from a lower level. The HTTP Runtime and the request pipeline provide all the same power that went into building the WebForms and WebService implementations – these implementations were actually built with .NET managed code. And all of that same functionality is available to you, should you decide you need to build a custom platform that sits at a level a little lower than WebForms.

WebForms are definitely the easiest way to build most Web interfaces, but if you’re building custom content handlers, or have special needs for processing the incoming or outgoing content, or you need to build a custom application server interface to another application, using these lower level handlers or modules can provide better performance and more control over the actual request process. With all the power that the high level implementations of WebForms and WebServices provide they also add quite a bit of overhead to requests that you can bypass by working at a lower level.

What is ASP.NET?

Let’s start with a simple definition: What is ASP.NET? I like to define ASP.NET as follows:


ASP.NET is a sophisticated engine using Managed Code for front to back processing of Web Requests.

It's much more than just WebForms and Web Services…

ASP.NET is a request processing engine. It takes an incoming request and passes it through its internal pipeline to an end point where you as a developer can attach code to process that request. This engine is actually completely separated from HTTP or the Web Server. In fact, the HTTP Runtime is a component that you can host in your own applications outside of IIS or any server side application altogether. For example, you can host the ASP.NET runtime in a Windows form (check out http://www.west-wind.com/presentations/aspnetruntime/aspnetruntime.asp for more detailed information on runtime hosting in Windows Forms apps).
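As a rough, hedged sketch of that self-hosting idea (the directory paths, page name, and host class below are assumptions for the example, and the host assembly must be reachable from the virtual directory's bin folder or the GAC), the approach looks roughly like this:

using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

// Hypothetical host class; it must derive from MarshalByRefObject because it is
// created inside the new AppDomain that CreateApplicationHost sets up.
public class AspNetHost : MarshalByRefObject
{
    public void ProcessPage(string page, string query)
    {
        // Writes the rendered output of the requested page to the console.
        TextWriter output = Console.Out;
        SimpleWorkerRequest workerRequest = new SimpleWorkerRequest(page, query, output);
        HttpRuntime.ProcessRequest(workerRequest);
    }
}

public class Program
{
    public static void Main()
    {
        // Paths are assumptions: a physical folder containing the .aspx pages to serve.
        AspNetHost host = (AspNetHost) ApplicationHost.CreateApplicationHost(
            typeof(AspNetHost), "/LocalAspNet", @"C:\LocalAspNet");

        // test.aspx is assumed to exist in the physical folder above.
        host.ProcessPage("test.aspx", "");
    }
}

The runtime hosting article linked above covers this approach in much more depth.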

The runtime provides a complex yet very elegant mechanism for routing requests through this pipeline. There are a number of interrelated objects, most of which are extensible either via subclassing or through event interfaces at almost every level of the process, so the framework is highly extensible. Through this mechanism it’s possible to hook into very low level interfaces such as caching, authentication, and authorization. You can even filter content by pre- or post-processing requests, or simply route incoming requests that match a specific signature directly to your code or to another URL. There are a lot of different ways to accomplish the same thing, but all of the approaches are straightforward to implement, yet provide flexibility in finding the best match for performance and ease of development.

The entire ASP.NET engine was completely built in managed code and all extensibility is provided via managed code extensions.

The entire ASP.NET engine was completely built in managed code and all of the extensibility functionality is provided via managed code extensions. This is a testament to the power of the .NET framework in its ability to build sophisticated and very performance oriented architectures. Above all though, the most impressive part of ASP.NET is the thoughtful design that makes the architecture easy to work with, yet provides hooks into just about any part of the request processing.

With ASP.NET you can perform tasks that previously were the domain of ISAPI extensions and filters on IIS, with some limitations, but it’s a lot closer than, say, ASP was. ISAPI is a low level Win32 style API that has a very meager interface and is very difficult to work with for sophisticated applications. Since ISAPI is very low level it is also very fast, but fairly unmanageable for application level development. So, ISAPI has for some time been relegated mainly to providing bridge interfaces to other applications or platforms. But ISAPI isn’t dead by any means. In fact, ASP.NET on Microsoft platforms interfaces with IIS through an ISAPI extension that hosts .NET and, through it, the ASP.NET runtime. ISAPI provides the core interface from the Web Server, and ASP.NET uses the unmanaged ISAPI code to retrieve input and send output back to the client. The content that ISAPI provides is available via common objects like HttpRequest and HttpResponse, which expose the unmanaged data as managed objects with a nice and accessible interface.

From Browser to ASP.NET

Let’s start at the beginning of the lifetime of a typical ASP.NET Web Request. A request starts on the browser, where the user types in a URL, clicks on a hyperlink or submits an HTML form (a POST request). Or a client application might make a call against an ASP.NET based Web Service, which is also serviced by ASP.NET. On the server side the Web Server (Internet Information Server 5 or 6) picks up the request. At the lowest level ASP.NET interfaces with IIS through an ISAPI extension. With ASP.NET this request usually is routed to a page with an .aspx extension, but how the process works depends entirely on the implementation of the HTTP Handler that is set up to handle the specified extension. In IIS, .aspx is mapped through an ‘Application Extension’ (also known as a script map) that points at the ASP.NET ISAPI DLL, aspnet_isapi.dll. Every request that fires ASP.NET must go through an extension that is registered and points at aspnet_isapi.dll.

Depending on the extension, ASP.NET routes the request to an appropriate handler that is responsible for picking up requests. For example, the .asmx extension for Web Services routes requests not to a page on disk but to a specially attributed class that identifies it as a Web Service implementation. Many other handlers are installed with ASP.NET, and you can also define your own. All of these HttpHandlers are mapped to point at the ASP.NET ISAPI extension in IIS, and configured in web.config to get routed to a specific HTTP Handler implementation. Each handler is a .NET class that handles a specific extension, which can range from simple Hello World behavior with a couple of lines of code to very complex handlers like the ASP.NET Page or Web Service implementations. For now, just understand that an extension is the basic mapping mechanism that ASP.NET uses to receive a request from ISAPI and then route it to a specific handler that processes the request.

ISAPI is the first and highest performance entry point into IIS for custom Web Request handling.

The ISAPI Connection


ISAPI is a low level unmanaged Win32 API. The interfaces defined by the ISAPI spec are very simplistic and optimized for performance. They are very low level (dealing with raw pointers and function pointer tables for callbacks) but they provide the lowest and most performance oriented interface that developers and tool vendors can use to hook into IIS. Because ISAPI is very low level it’s not well suited for building application level code, and ISAPI tends to be used primarily as a bridge interface to provide Application Server type functionality to higher level tools. For example, ASP and ASP.NET are both layered on top of ISAPI, as are Cold Fusion, most Perl, PHP and JSP implementations running on IIS, as well as many third party solutions such as my own Web Connection framework for Visual FoxPro. ISAPI is an excellent tool to provide the high performance plumbing interface to higher level applications, which can then abstract the information that ISAPI provides. In ASP and ASP.NET, the engines abstract the information provided by the ISAPI interface in the form of objects like Request and Response that read their content out of the ISAPI request information. Think of ISAPI as the plumbing. For ASP.NET the ISAPI DLL is very lean and acts merely as a routing mechanism to pipe the inbound request into the ASP.NET runtime. All the heavy lifting and processing, and even the request thread management, happens inside of the ASP.NET engine and your code.

As a protocol ISAPI supports both ISAPI extensions and ISAPI Filters. Extensions are a request handling interface and provide the logic to handle input and output with the Web Server – it’s essentially a transaction interface. ASP and ASP.NET are implemented as ISAPI extensions. ISAPI filters are hook interfaces that allow the ability to look at EVERY request that comes into IIS and to modify the content or change the behavior of functionalities like Authentication. Incidentally ASP.NET maps ISAPI-like functionality via two concepts: Http Handlers (extensions) and Http Modules (filters). We’ll look at these later in more detail.

ISAPI is the initial code point that marks the beginning of an ASP.NET request. ASP.NET maps various extensions to its ISAPI extension which lives in the .NET Framework directory:

<.NET FrameworkDir>\aspnet_isapi.dll

You can interactively see these mappings in the IIS Service Manager as shown in Figure 1. Look at the root of the Web Site and the Home Directory tab, then Configuration | Mappings.


Figure 1: IIS maps various extensions like .ASPX to the ASP.NET ISAPI extension. Through this mechanism requests are routed into ASP.NET's processing pipeline at the Web Server level.

You shouldn’t set these extensions manually as .NET requires a number of them. Instead use the aspnet_regiis.exe utility to make sure that all the various scriptmaps get registered properly:

cd <.NetFrameworkDirectory>
aspnet_regiis -i

This will register the particular version of the ASP.NET runtime for the entire Web site by registering the scriptmaps and setting up the client side scripting libraries used by the various controls for uplevel browsers. Note that it registers the particular version of the CLR that is installed in the above directory. Options on aspnet_regiis let you configure virtual directories individually. Each version of the .NET framework has its own version of aspnet_regiis and you need to run the appropriate one to register a site or virtual directory for a specific version of the .NET framework. Starting with ASP.NET 2.0, an IIS ASP.NET configuration page lets you pick the .NET version interactively in the IIS management console.
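For reference, the utility also supports listing installed versions and registering a single virtual directory. In this sketch the framework path is a typical default and W3SVC/1/ROOT/MyApp is a hypothetical metabase path; -lv lists the ASP.NET versions registered on the machine, and -s registers the scriptmaps for just the specified application:

cd %windir%\Microsoft.NET\Framework\v1.1.4322
aspnet_regiis -lv
aspnet_regiis -s W3SVC/1/ROOT/MyApp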

IIS 5 and 6 work differently

When a request comes in, IIS checks for the script map and routes the request to the aspnet_isapi.dll. The operation of the DLL and how it gets to the ASP.NET runtime varies significantly between IIS 5 and 6. Figure 2 shows a rough overview of the flow.

IIS 5 hosts aspnet_isapi.dll directly in the inetinfo.exe process or in one of its isolated worker processes if you have isolation set to Medium or High for the Web site or virtual directory. When the first ASP.NET request comes in, the DLL will spawn a new process in another EXE, aspnet_wp.exe, and route processing to this spawned process. This process in turn loads and hosts the .NET runtime. Every request that comes into the ISAPI DLL then routes to this worker process via Named Pipe calls.

Figure 2 – Request flow from IIS to the ASP.NET Runtime and through the request processing pipeline from a high level. IIS 5 and IIS 6 interface with ASP.NET in different ways but the overall process once it reaches the ASP.NET Pipeline is the same.

IIS 6, unlike previous servers, is fully optimized for ASP.NET.

IIS 6 – Viva the Application Pool

IIS 6 changes the processing model significantly in that IIS no longer hosts any foreign executable code like ISAPI extensions directly. Instead, IIS 6 always creates a separate worker process (an Application Pool) and all processing occurs inside of this process, including execution of the ISAPI DLL. Application Pools are a big improvement for IIS 6, as they allow very granular control over what executes in a given process. Application Pools can be configured for every virtual directory or the entire Web site, so you can easily isolate every Web application into its own process that will be completely isolated from any other Web application running on the same machine. If one process dies it will not affect any others, at least from the Web processing perspective.

In addition, Application Pools are highly configurable. You can configure their execution security environment by setting an execution impersonation level for the pool which allows you to customize the rights given to a Web application in that same granular fashion. One big improvement for ASP.NET is that the Application Pool replaces most of the ProcessModel entry in machine.config. This entry was difficult to manage in IIS 5, because the settings were global and could not be overridden in an application specific web.config file. When running IIS 6, the ProcessModel setting is mostly ignored and settings are instead read from the Application Pool. I say mostly – some settings, like the size of the ThreadPool and IO threads still are configured through this key since they have no equivalent in the Application Pool settings of the server.

Because Application Pools are external executables these executables can also be easily monitored and managed. IIS 6 provides a number of health checking, restarting and timeout options that can detect and in many cases correct problems with an application. Finally IIS 6’s Application Pools don’t rely on COM+ as IIS 5 isolation processes did which has improved performance and stability especially for applications that need to use COM objects internally.

Although IIS 6 application pools are separate EXEs, they are highly optimized for HTTP operations by directly communicating with a kernel mode HTTP.SYS driver. Incoming requests are directly routed to the appropriate application pool. InetInfo acts merely as an Administration and configuration service – most interaction actually occurs directly between HTTP.SYS and the Application Pools, all of which translates into a more stable and higher performance environment over IIS 5. This is especially true for static content and ASP.NET applications.

An IIS 6 application pool also has intrinsic knowledge of ASP.NET and ASP.NET can communicate with new low level APIs that allow direct access to the HTTP Cache APIs which can offload caching from the ASP.NET level directly into the Web Server’s cache.

In IIS 6, ISAPI extensions run in the Application Pool worker process. The .NET Runtime also runs in this same process, so communication between the ISAPI extension and the .NET runtime happens in-process which is inherently more efficient than the named pipe interface that IIS 5 must use. Although the IIS hosting models are very different the actual interfaces into managed code are very similar – only the process in getting the request routed varies a bit.

The ISAPIRuntime.ProcessRequest() method is the first entry point into ASP.NET.

Getting into the .NET runtime

The actual entry points into the .NET Runtime occur through a number of undocumented classes and interfaces. Little is known about these interfaces outside of Microsoft, and Microsoft folks are not eager to talk about the details, as they deem this an implementation detail that has little effect on developers building applications with ASP.NET.

The worker processes ASPNET_WP.EXE (IIS 5) and W3WP.EXE (IIS 6) host the .NET runtime, and the ISAPI DLL calls into a small set of unmanaged interfaces via low level COM that eventually forward calls to an instance of a subclass of the ISAPIRuntime class. The first entry point to the runtime is the undocumented ISAPIRuntime class, which exposes the IISAPIRuntime interface via COM to a caller. These COM interfaces are low level IUnknown based interfaces that are meant for internal calls from the ISAPI extension into ASP.NET. Figure 3 shows the interface and call signatures for the IISAPIRuntime interface as shown in Lutz Roeder’s excellent .NET Reflector tool (http://www.aisto.com/roeder/dotnet/). Reflector is an assembly viewer and disassembler that makes it very easy to look at metadata and disassembled code (in IL, C#, VB) as shown in Figure 3. It’s a great way to explore the bootstrapping process.


Figure 3 – If you want to dig into the low level interfaces, open up Reflector and point at the System.Web.Hosting namespace. The entry point to ASP.NET occurs through a managed COM interface called from the ISAPI DLL, which receives an unmanaged pointer to the ISAPI ECB. The ECB provides access to the full ISAPI interface to allow retrieving request data and sending output back to IIS.

The IISAPIRuntime interface acts as the interface point between the unmanaged code coming from the ISAPI extension (directly in IIS 6 and indirectly via the Named Pipe handler in IIS 5) and the managed ASP.NET runtime. If you take a look at this class you’ll find a ProcessRequest method with a signature like this:

[return: MarshalAs(UnmanagedType.I4)]
int ProcessRequest([In] IntPtr ecb,
                   [In, MarshalAs(UnmanagedType.I4)] int useProcessModel);

The ecb parameter is the ISAPI Extension Control Block (ECB) which is passed as an unmanaged resource to ProcessRequest. The method then takes the ECB and uses it as the base input and output interface used with the Request and Response objects. An ISAPI ECB contains all low level request information including server variables, an input stream for form variables as well as an output stream that is used to write data back to the client. The single ecb reference basically provides access to all of the functionality an ISAPI request has access to and ProcessRequest is the entry and exit point where this resource initially makes contact with managed code.

The ISAPI extension runs requests asynchronously. In this mode the ISAPI extension immediately returns on the calling worker process or IIS thread, but keeps the ECB for the current request alive. The ECB then includes a mechanism for letting ISAPI know when the request is complete (via ecb.ServerSupportFunction) which then releases the ECB. This asynchronous processing releases the ISAPI worker thread immediately, and offloads processing to a separate thread that is managed by ASP.NET.

ASP.NET receives this ecb reference and uses it internally to retrieve information about the current request such as server variables, POST data as well as returning output back to the server. The ecb stays alive until the request finishes or times out in IIS and ASP.NET continues to communicate with it until the request is done. Output is written into the ISAPI output stream (ecb.WriteClient()) and when the request is done, the ISAPI extension is notified of request completion to let it know that the ECB can be freed. This implementation is very efficient as the .NET classes essentially act as a fairly thin wrapper around the high performance, unmanaged ISAPI ECB.

Loading .NET – somewhat of a mystery

Let’s back up one step here: I skipped over how the .NET runtime gets loaded. Here’s where things get a bit fuzzy. I haven’t found any documentation on this process and since we’re talking about native code there’s no easy way to disassemble the ISAPI DLL and figure it out.


My best guess is that the worker process bootstraps the .NET runtime from within the ISAPI extension on the first hit against an ASP.NET mapped extension. Once the runtime exists, the unmanaged code can request an instance of an ISAPIRuntime object for a given virtual path if one doesn’t exist yet. Each virtual directory gets its own AppDomain and within that AppDomain the ISAPIRuntime exists from which the bootstrapping process for an individual application starts. Instantiation appears to occur over COM as the interface methods are exposed as COM callable methods.

To create the ISAPIRuntime instance, the System.Web.Hosting.AppDomainFactory.Create() method is called when the first request for a specific virtual directory arrives. This starts the ‘Application’ bootstrapping process. The call receives parameters for the type and module name and the virtual path information for the application, which is used by ASP.NET to create an AppDomain and launch the ASP.NET application for the given virtual directory. This HttpRuntime derived object is created in a new AppDomain. Each virtual directory or ASP.NET application is hosted in a separate AppDomain, and they get loaded only as requests hit the particular ASP.NET application. The ISAPI extension manages these instances of the HttpRuntime objects, and routes inbound requests to the right one based on the virtual path of the request.


Figure 4 – The transfer of the ISAPI request into the HTTP Pipeline of ASP.NET uses a number of undocumented classes and interfaces and requires several factory method calls. Each Web Application/Virtual runs in its own AppDomain with the caller holding a reference to an IISAPIRuntime interface that triggers the ASP.NET request processing.

Back in the runtime

At this point we have an instance of ISAPIRuntime active and callable from the ISAPI extension. Once the runtime is up and running the ISAPI code calls into the ISAPIRuntime.ProcessRequest() method which is the real entry point into the ASP.NET Pipeline. The flow from there is shown in Figure 4.

Remember ISAPI is multi-threaded, so requests will come in on multiple threads through the reference that was returned by AppDomainFactory.Create(). Listing 1 shows the disassembled code from the IsapiRuntime.ProcessRequest method that receives an ISAPI ecb object and server type as parameters. The method is thread safe, so multiple ISAPI threads can safely call this single returned object instance simultaneously.

Listing 1: The ProcessRequest method receives an ISAPI ecb and passes it on to the worker request

public int ProcessRequest(IntPtr ecb, int iWRType)
{
    HttpWorkerRequest request1 = ISAPIWorkerRequest.CreateWorkerRequest(ecb, iWRType);

    string text1 = request1.GetAppPathTranslated();
    string text2 = HttpRuntime.AppDomainAppPathInternal;
    if (((text2 == null) || text1.Equals(".")) ||
        (string.Compare(text1, text2, true, CultureInfo.InvariantCulture) == 0))
    {
        HttpRuntime.ProcessRequest(request1);
        return 0;
    }

    HttpRuntime.ShutdownAppDomain("Physical application path changed from " +
                                  text2 + " to " + text1);
    return 1;
}

The actual code here is not important, and keep in mind that this is disassembled internal framework code that you’ll never deal with directly and that might change in the future. It’s meant to demonstrate what’s happening behind the scenes. ProcessRequest receives the unmanaged ECB reference and passes it on to the ISAPIWorkerRequest object which is in charge of creating the Request Context for the current request as shown in Listing 2.

The System.Web.Hosting.ISAPIWorkerRequest class is an abstract subclass of HttpWorkerRequest, whose job it is to create an abstracted view of the input and output that serves as the input for the Web application. Notice another factory method here: CreateWorkerRequest, which as a second parameter receives the type of worker request object to create. There are three different versions: ISAPIWorkerRequestInProc, ISAPIWorkerRequestInProcForIIS6, and ISAPIWorkerRequestOutOfProc. This object is created on each incoming hit and serves as the basis for the Request and Response objects, which will receive their data and streams from the data provided by the WorkerRequest. The abstract HttpWorkerRequest class is meant to provide a high level abstraction around the low level interfaces so that, regardless of where the data comes from (a CGI Web Server, the Web Browser Control, or some custom mechanism you use to feed the data to the HTTP Runtime), ASP.NET can retrieve the information consistently.

In the case of IIS the abstraction is centered around an ISAPI ECB block. In our request processing, ISAPIWorkerRequest hangs on to the ISAPI ECB and retrieves data from it as needed. Listing 2 shows how the query string value is retrieved for example.

Listing 2: An ISAPIWorkerRequest method that uses the unmanaged ECB

// *** Implemented in ISAPIWorkerRequest
public override byte[] GetQueryStringRawBytes()
{
    byte[] buffer1 = new byte[this._queryStringLength];
    if (this._queryStringLength > 0)
    {
        int num1 = this.GetQueryStringRawBytesCore(buffer1, this._queryStringLength);
        if (num1 != 1)
        {
            throw new HttpException("Cannot_get_query_string_bytes");
        }
    }
    return buffer1;
}

// *** Implemented in a specific implementation class: ISAPIWorkerRequestInProcIIS6
internal override int GetQueryStringCore(int encode, StringBuilder buffer, int size)
{
    if (this._ecb == IntPtr.Zero)
    {
        return 0;
    }
    return UnsafeNativeMethods.EcbGetQueryString(this._ecb, encode, buffer, size);
}

ISAPIWorkerRequest implements a high level wrapper method that calls into lower level Core methods, which are responsible for performing the actual access to the unmanaged APIs, or the ‘service level implementation’. The Core methods are implemented in the specific ISAPIWorkerRequest instance subclasses and thus provide the specific implementation for the environment that it’s hosted in. This makes for an easily pluggable environment where additional implementation classes can be provided later as newer Web Server interfaces or other platforms are targeted by ASP.NET. There’s also a helper class, System.Web.UnsafeNativeMethods. Many of its methods operate on the ISAPI ECB structure, performing unmanaged calls into the ISAPI extension.

HttpRuntime, HttpContext, and HttpApplication – Oh my

When a request hits, it is routed to the ISAPIRuntime.ProcessRequest() method. This method in turn calls HttpRuntime.ProcessRequest that does several important things (look at System.Web.HttpRuntime.ProcessRequestInternal with Reflector):

- Creates a new HttpContext instance for the request
- Retrieves an HttpApplication instance
- Calls HttpApplication.Init() to set up pipeline events
- Init() fires HttpApplication.ResumeProcessing(), which starts the ASP.NET pipeline processing

First, a new HttpContext object is created and it is passed the ISAPIWorkerRequest that wraps the ISAPI ECB. The Context is available throughout the lifetime of the request and ALWAYS accessible via the static HttpContext.Current property. As the name implies, the HttpContext object represents the context of the currently active request, as it contains references to all of the vital objects you typically access during the request lifetime: Request, Response, Application, Server, Cache. At any time during request processing HttpContext.Current gives you access to all of these objects.

The HttpContext object also contains a very useful Items collection that you can use to store data that is request specific. The context object gets created at the beginning of the request cycle and released when the request finishes, so data stored in the Items collection is specific only to the current request. A good example use is a request logging mechanism where you want to track the start and end times of a request by hooking the Application_BeginRequest and Application_EndRequest methods in Global.asax, as shown in Listing 3. HttpContext is your friend: you’ll use it liberally if you need data in different parts of the request or page processing.

Listing 3 – Using the HttpContext.Items collection lets you save data between pipeline events

protected void Application_BeginRequest(Object sender, EventArgs e)
{
    // *** Request Logging
    if (App.Configuration.LogWebRequests)
        Context.Items.Add("WebLog_StartTime", DateTime.Now);
}

protected void Application_EndRequest(Object sender, EventArgs e)
{
    // *** Request Logging
    if (App.Configuration.LogWebRequests)
    {
        try
        {
            TimeSpan Span = DateTime.Now.Subtract(
                (DateTime) Context.Items["WebLog_StartTime"]);
            int MilliSecs = (int) Span.TotalMilliseconds;

            // do your logging
            WebRequestLog.Log(App.Configuration.ConnectionString,
                              true, MilliSecs);
        }
        catch { }
    }
}

Once the Context has been set up, ASP.NET needs to route your incoming request to the appropriate application/virtual directory by way of an HttpApplication object. Every ASP.NET application must be set up as a Virtual (or Web Root) directory, and each of these ‘applications’ is handled independently.

The HttpApplication is like a master of ceremonies – it is where the processing action starts.

Master of your domain: HttpApplication

Each request is routed to an HttpApplication object. The HttpApplicationFactory class creates a pool of HttpApplication objects for your ASP.NET application depending on the load on the application and hands out references for each incoming request. The size of the pool is limited by the MaxWorkerThreads setting in machine.config’s ProcessModel key, which by default is 20.

The pool starts out with a smaller number though, usually one, and it then grows as multiple simultaneous requests need to be processed. The pool is monitored, so under load it may grow to its maximum number of instances, which is later scaled back to a smaller number as the load drops.

HttpApplication is the outer container for your specific Web application and it maps to the class that is defined in Global.asax. It’s the first entry point into the HTTP Runtime that you actually see on a regular basis in your applications. If you look in Global.asax (or the code behind class) you’ll find that this class derives directly from HttpApplication:

public class Global : System.Web.HttpApplication


HttpApplication’s primary purpose is to act as the event controller of the Http Pipeline and so its interface consists primarily of events. The event hooks are extensive and include:

- BeginRequest
- AuthenticateRequest
- AuthorizeRequest
- ResolveRequestCache
- AcquireRequestState
- PreRequestHandlerExecute
- ...Handler Execution...
- PostRequestHandlerExecute
- ReleaseRequestState
- UpdateRequestCache
- EndRequest

Each of these events is also implemented in the Global.asax file via empty methods that start with an Application_ prefix, for example Application_BeginRequest() and Application_AuthorizeRequest(). These handlers are provided for convenience since they are frequently used in applications and save you from having to explicitly create the event handler delegates.

It’s important to understand that each ASP.NET virtual application runs in its own AppDomain and that, inside of that AppDomain, multiple HttpApplication instances run simultaneously, fed out of a pool that ASP.NET manages. This is so that multiple requests can be processed at the same time without interfering with each other.

To see the relationship between the AppDomain, Threads and the HttpApplication check out the code in Listing 4.

Listing 4 – Showing the relation between AppDomain, Threads and HttpApplication instances

private void Page_Load(object sender, System.EventArgs e)
{
    // Put user code to initialize the page here
    this.ApplicationId = ((HowAspNetWorks.Global)
        HttpContext.Current.ApplicationInstance).ApplicationId;
    this.ThreadId = AppDomain.GetCurrentThreadId();
    this.DomainId = AppDomain.CurrentDomain.FriendlyName;
    this.ThreadInfo = "ThreadPool Thread: " +
        System.Threading.Thread.CurrentThread.IsThreadPoolThread.ToString() +
        "<br>Thread Apartment: " +
        System.Threading.Thread.CurrentThread.ApartmentState.ToString();

    // *** Simulate a slow request so we can see multiple
    // requests side by side.
    System.Threading.Thread.Sleep(3000);
}

This is part of a demo provided with the article’s samples, and the running form is shown in Figure 5. To check this out, run two instances of a browser, hit this sample page, and watch the various IDs.

Figure 5 – You can easily check out how AppDomains, Application Pool instances, and Request Threads interact with each other by running a couple of browser instances simultaneously. When multiple requests fire you’ll see the thread and Application IDs change, while the AppDomain ID stays the same.

You’ll notice that the AppDomain ID stays steady while the thread and HttpApplication IDs change on most requests, although they likely will repeat. HttpApplications are served out of a collection and are reused for subsequent requests, so the IDs repeat at times. Note though that Application instances are not tied to a specific thread; rather, they are assigned to the active executing thread of the current request.


Threads are served from the .NET ThreadPool and by default are Multithreaded Apartment (MTA) style threads. You can override this apartment state in ASP.NET pages with the ASPCOMPAT="true" attribute in the @Page directive. ASPCOMPAT is meant to provide COM components a safe environment to run in and ASPCOMPAT uses special Single Threaded Apartment (STA) threads to service those requests. STA threads are set aside and pooled separately as they require special handling.
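For example, a page that needs to call apartment-threaded COM components can opt into an STA thread with the directive below (the Language attribute shown is just part of the illustration):

<%@ Page Language="C#" ASPCOMPAT="true" %>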

The fact that these HttpApplication objects are all running in the same AppDomain is very important. This is how ASP.NET can guarantee that changes to web.config or to individual ASP.NET pages get recognized throughout the AppDomain. Making a change to a value in web.config causes the AppDomain to be shut down and restarted. This makes sure that all instances of HttpApplication see the changes made, because when the AppDomain reloads, the changes are re-read at startup. Any static references are also reloaded when the AppDomain restarts, so if the application reads values from application configuration settings, these values also get refreshed.

To see this in the sample, hit the ApplicationPoolsAndThreads.aspx page and note the AppDomain ID. Then go in and make a change in web.config (add a space and save), and reload the page. You’ll find that a new AppDomain has been created.

In essence the Web Application/Virtual completely ‘restarts’ when this happens. Any requests that are already in the pipeline processing will continue running through the existing pipeline, while any new requests coming in are routed to the new AppDomain. In order to deal with ‘hung requests’ ASP.NET forcefully shuts down the AppDomain after the request timeout period is up even if requests are still pending. So it’s actually possible that two AppDomains exist for the same HttpApplication at a given point in time as the old one’s shutting down and the new one is ramping up. Both AppDomains continue to serve their clients until the old one has run out its pending requests and shuts down leaving just the new AppDomain running.

Flowing through the ASP.NET Pipeline

The HttpApplication is responsible for the request flow by firing events that signal your application that things are happening. This occurs as part of the HttpApplication.Init() method (look at System.Web.HttpApplication.InitInternal and HttpApplication.ResumeSteps() with Reflector) which sets up and starts a series of events in succession including the call to execute any handlers. The event handlers map to the events that are automatically set up in global.asax, and they also map any attached HTTPModules, which are essentially an externalized event sink for the events that HttpApplication publishes.

Both HttpModules and HttpHandlers are loaded dynamically via entries in Web.config and attached to the event chain. HttpModules are actual event handlers that hook specific HttpApplication events, while HttpHandlers are an end point that gets called to handle ‘application level request processing’.


Both Modules and Handlers are loaded and attached to the call chain as part of the HttpApplication.Init() method call. Figure 6 shows the various events and when they happen and which parts of the pipeline they affect.

Figure 6 – Events flowing through the ASP.NET HTTP Pipeline. The HttpApplication object’s events drive requests through the pipeline. Http Modules can intercept these events and override or enhance existing functionality.

HttpContext, HttpModules and HttpHandlers

The HttpApplication itself knows nothing about the data being sent to the application; it is merely a messaging object that communicates via events. It fires events and passes information via the HttpContext object to the called methods. The actual state data for the current request is maintained in the HttpContext object mentioned earlier. It provides all the request specific data and follows each request from beginning to end through the pipeline. Figure 7 shows the flow through the ASP.NET pipeline. Notice the Context object, which is your compadre from beginning to end of the request and can be used to store information in one event method and retrieve it in a later event method.

Once the pipeline is started, HttpApplication starts firing events one by one as shown in Figure 6. Each of the events is fired and, if handlers are hooked up, those handlers execute and perform their tasks. The main purpose of this process is to eventually call the HttpHandler hooked up to a specific request. Handlers are the core processing mechanism for ASP.NET requests and usually the place where any application level code is executed. Remember that the ASP.NET Page and Web Service frameworks are implemented as HttpHandlers, and that’s where all the core processing of the request is handled. Modules tend to be of a more core nature, used to prepare or post process the Context that is delivered to the handler. Typical default modules in ASP.NET handle Authentication and Caching for pre-processing and various encoding mechanisms on post processing.

There’s plenty of information available on HttpHandlers and HttpModules so to keep this article a reasonable length I’m going to provide only a brief overview of handlers.

HttpModules

As requests move through the pipeline a number of events fire on the HttpApplication object. We’ve already seen that these events are published as event methods in Global.asax. This approach is application specific though, which is not always what you want. If you want to build generic HttpApplication event hooks that can be plugged into any Web application, you can use HttpModules, which are reusable and don’t require application specific code except for an entry in web.config.

Modules are in essence filters, similar in functionality to ISAPI filters but at the ASP.NET request level. Modules allow hooking events for EVERY request that passes through the ASP.NET HttpApplication object. These modules are stored as classes in external assemblies that are configured in web.config and loaded when the Application starts. By implementing specific interfaces and methods the module then gets hooked up to the HttpApplication event chain. Multiple HttpModules can hook the same event, and event ordering is determined by the order they are declared in web.config. Here’s what a module definition looks like in web.config:

<configuration>
  <system.web>
    <httpModules>
      <add name="BasicAuthModule"
           type="HttpHandlers.BasicAuth,WebStore" />
    </httpModules>
  </system.web>
</configuration>

Note that you need to specify a full typename and an assembly name without the DLL extension.

Modules allow you to look at each incoming Web request and perform an action based on the events that fire. Modules are great for modifying request or response content, providing custom authentication, or otherwise providing pre or post processing for every request that occurs against ASP.NET in a particular application. Many of ASP.NET’s features like the Authentication and Session engines are implemented as HTTP Modules.


While HttpModules feel similar to ISAPI filters in that they look at every request that comes through an ASP.NET application, they are limited to looking at requests mapped to a single specific ASP.NET application or virtual directory and then only to requests that are mapped to ASP.NET. Thus you can look at all ASPX pages or any of the other custom extensions that are mapped to this application. You cannot, however, look at standard .HTM or image files unless you explicitly map the extension to the ASP.NET ISAPI DLL by adding an extension as shown in Figure 1. A common use for a module might be to filter content for JPG images in a special folder and display a ‘SAMPLE’ overlay on top of every image by drawing on top of the returned bitmap with GDI+.

Implementing an HTTP Module is very easy: you must implement the IHttpModule interface, which contains only two methods, Init() and Dispose(). The event parameters passed to your handlers include a reference to the HttpApplication object, which in turn gives you access to the HttpContext object. In these methods you hook up to HttpApplication events. For example, if you want to hook the AuthenticateRequest event with a module you would do what’s shown in Listing 5.

Listing 5: The basics of an HTTP Module are very simple to implement

public class BasicAuthCustomModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // *** Hook up any HttpApplication events
        application.AuthenticateRequest +=
            new EventHandler(this.OnAuthenticateRequest);
    }

    public void Dispose() { }

    public void OnAuthenticateRequest(object source, EventArgs eventArgs)
    {
        HttpApplication app = (HttpApplication) source;
        HttpContext Context = HttpContext.Current;
        // … do what you have to do …
    }
}

Remember that your Module has access to the HttpContext object and from there to all the other intrinsic ASP.NET pipeline objects like Response and Request, so you can retrieve input and so on. But keep in mind that certain things may not be available until later in the chain.

You can hook multiple events in the Init() method so your module can manage multiple functionally different operations in one module. However, it’s probably cleaner to separate differing logic out into separate classes to make sure the module is modular. <g> In many cases, functionality that you implement may require that you hook multiple events; for example, a logging filter might log the start time of a request in BeginRequest and then write the request completion into the log in EndRequest, as sketched below.
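Here is a minimal, hedged sketch of that idea. The module name and the response header it writes are assumptions, and a real implementation would log to a store instead of emitting a header:

using System;
using System.Web;

// Hypothetical module that times each request by pairing BeginRequest and EndRequest.
public class RequestTimingModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(OnBeginRequest);
        application.EndRequest += new EventHandler(OnEndRequest);
    }

    public void Dispose() { }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        // Stash the start time in the per-request Items collection.
        HttpContext.Current.Items["Timing_Start"] = DateTime.Now;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        HttpContext context = HttpContext.Current;
        object start = context.Items["Timing_Start"];
        if (start != null)
        {
            TimeSpan elapsed = DateTime.Now.Subtract((DateTime) start);
            // For demonstration only: expose the duration as a response header.
            context.Response.AppendHeader("X-Request-Duration-Ms",
                elapsed.TotalMilliseconds.ToString());
        }
    }
}

Like any other module, it would need an entry in web.config’s <httpModules> section, just like the BasicAuthModule entry shown earlier.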


Watch out for one important gotcha with HttpModules and HttpApplication events: Response.End() or HttpApplication.CompleteRequest() will shortcut the HttpApplication and Module event chain. See the sidebar “Watch out for Response.End() “ for more info.

HttpHandlers

Modules are fairly low level and fire against every inbound request to the ASP.NET application. Http Handlers are more focused and operate on a specific request mapping, usually a page extension that is mapped to the handler.

Http Handler implementations are very basic in their requirements, but through access to the HttpContext object a lot of power is available. Http Handlers are implemented through a very simple IHttpHandler interface (or its asynchronous cousin, IHttpAsyncHandler) which consists of merely a single method, ProcessRequest(), and a single property, IsReusable. The key is ProcessRequest(), which gets passed an instance of the HttpContext object. This single method is responsible for handling a Web request start to finish.

Single, simple method? Must be too simple, right? Well, simple interface, but not simplistic in what’s possible! Remember that WebForms and WebServices are both implemented as Http Handlers, so there’s a lot of power wrapped up in this seemingly simplistic interface. The key is the fact that by the time an Http Handler is reached, all of ASP.NET’s internal objects are set up and configured to start processing requests, and the HttpContext object provides all of the relevant request functionality to retrieve input and send output back to the Web Server.

For an HTTP Handler all action occurs through this single call to ProcessRequest(). This can be as simple as:

public void ProcessRequest(HttpContext context)
{
    context.Response.Write("Hello World");
}

to a full implementation like the WebForms Page engine that can render complex forms from HTML templates. The point is that it’s up to you to decide what you want to do with this simple, but powerful interface!
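To make that concrete, here is a hedged sketch of a complete minimal handler and its registration. The class name, namespace, assembly name, and the .hello extension are all invented for the example, and the extension would also need to be mapped to aspnet_isapi.dll in IIS as shown in Figure 1:

using System.Web;

namespace MyHandlers
{
    // Hypothetical stand-alone handler.
    public class HelloHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello World");
        }

        // Returning true lets ASP.NET reuse this instance for subsequent requests.
        public bool IsReusable
        {
            get { return true; }
        }
    }
}

And the corresponding web.config entry:

<configuration>
  <system.web>
    <httpHandlers>
      <add verb="*" path="*.hello"
           type="MyHandlers.HelloHandler,MyHandlers" />
    </httpHandlers>
  </system.web>
</configuration>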

Because the Context object is available to you, you get access to the Request, Response, Session and Cache objects, so you have all the key features of an ASP.NET request at your disposal to figure out what users submitted and return content you generate back to the client. Remember the Context object – it’s your friend throughout the lifetime of an ASP.NET request!

The key operation of the handler should eventually be to write output into the Response object or, more specifically, the Response object’s OutputStream. This output is what actually gets sent back to the client. Behind the scenes the ISAPIWorkerRequest manages sending the OutputStream back into the ISAPI ecb.WriteClient method that actually performs the IIS output generation.

Figure 7 – The ASP.NET Request pipeline flows requests through a set of event interfaces that provide much flexibility. The Application acts as the hosting container that loads up the Web application and fires events as requests come in and pass through the pipeline. Each request follows a common path through the Http Filters and Modules configured. Filters can examine each request going through the pipeline and Handlers allow implementation of application logic or application level interfaces like Web Forms and Web Services. To provide Input and Output for the application the Context object provides request specific information throughout the entire process.

WebForms implements an Http Handler with a much more high level interface on top of this very basic framework, but eventually a WebForm’s Render() method simply ends up using an HtmlTextWriter object to write its final output to context.Response.OutputStream. So while very fancy, ultimately even a high level tool like WebForms is just a high level abstraction on top of the Request and Response objects.

You might wonder at this point whether you need to deal with Http Handlers at all. After all, WebForms provides an easily accessible Http Handler implementation, so why bother with something a lot more low level and give up that flexibility?


WebForms are great for generating complex HTML pages and business level logic that requires graphical layout tools and template backed pages. But the WebForms engine performs a lot of tasks that add overhead. If all you want to do is read a file from the system and return it through code, it's much more efficient to bypass the Web Forms Page framework and feed the file back directly. If you do things like image serving from a database there's no need to go into the Page framework – you don't need templates and there surely is no Web UI that requires you to capture events off an image served. There's no reason to set up a page object and session and hook up Page level events – all of that requires execution of code that has nothing to do with your task at hand. So handlers are more efficient. Handlers can also do things that aren't possible with WebForms, such as processing requests without the need for a physical file on disk, which is known as a virtual Url. To do this make sure you turn off the 'Check that file exists' checkbox in the Application Extension dialog shown in Figure 1.

This is common for content providers, such as dynamic image processing, XML servers, URL Redirectors providing vanity Urls, download managers and the like, none of which would benefit from the WebForm engine.
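As a sketch of the kind of lightweight content provider described above – the data access call and the handler name are hypothetical – an image handler can stay entirely outside the Page framework:

using System.Web;

// Hypothetical image handler: no Page object, no ViewState, no events – just bytes out.
public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string imageId = context.Request.QueryString["id"];

        // LoadImageFromDatabase is a stand-in for your own ADO.NET lookup code.
        byte[] imageBytes = LoadImageFromDatabase(imageId);

        context.Response.ContentType = "image/jpeg";
        context.Response.OutputStream.Write(imageBytes, 0, imageBytes.Length);
    }

    public bool IsReusable
    {
        get { return true; }
    }

    private byte[] LoadImageFromDatabase(string imageId)
    {
        // ... database retrieval omitted ...
        return new byte[0];
    }
}

The handler still has to be mapped to an extension in the httpHandlers section of web.config (and, for virtual Urls, to aspnet_isapi.dll in IIS), but none of the WebForms machinery ever runs.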

Have I stooped low enough for you?

Phew – we’ve come full circle here for the processing cycle of requests. That’s a lot of low level information and I haven’t even gone into great detail about how HTTP Modules and HTTP Handlers work. It took some time to dig up this information and I hope this gives you some of the same satisfaction it gave me in understanding how ASP.NET works under the covers.

Before I’m done let’s do the quick review of the event sequences I’ve discussed in this article from IIS to handler:

IIS gets the request
IIS looks up a script map extension and maps to aspnet_isapi.dll
Code hits the worker process (aspnet_wp.exe in IIS5 or w3wp.exe in IIS6)
The .NET runtime is loaded
IsapiRuntime.ProcessRequest() is called by non-managed code
An IsapiWorkerRequest is created once per request
HttpRuntime.ProcessRequest() is called with the Worker Request
The HttpContext object is created by passing the Worker Request as input
HttpApplication.GetApplicationInstance() is called with the Context to retrieve an instance from the pool
HttpApplication.Init() is called to start the pipeline event sequence and hook up modules and handlers
HttpApplication.ProcessRequest() is called to start processing
Pipeline events fire
Handlers are called and the ProcessRequest method is fired
Control returns to the pipeline and post request events fire

It’s a lot easier to remember how all of the pieces fit together with this simple list handy. I look at it from time to time to remember. So now, get back to work and do something non-abstract…

Although what I discuss here is based on ASP.NET 1.1, it looks like the underlying processes described here haven't changed in ASP.NET 2.0.

Many thanks to Mike Volodarsky from Microsoft for reviewing this article and providing a few additional hints and Michele Leroux Bustamante for providing the basis for the ASP.NET Pipeline Request Flow slide. If you have any comments or questions feel free to post them on the Comment link below.

HTML is often thought of as the sole domain of Web applications. But HTML's versatile display attributes are also very useful for handling data display of all sorts in desktop applications. The Visual Studio.Net start page is a good example. Coupled with a scripting/template mechanism you can build highly extendable applications that would be very difficult to build using standard Windows controls. In this article Rick introduces how to host the ASP.Net runtime in desktop applications and utilize this technology in a completely client side application using the Web Browser control.

A few issues back I introduced the topic of dynamic code execution, which is not quite trivial in .Net. That article garnered all sorts of interest and questions on how to utilize this technology in your applications beyond the basics. Most of the questions centered around the apparently intriguing topic of 'executing' script pages that use ASP style syntax. Due to the size of the article I didn't have enough room to add an extensive example of how to apply this technology. I will do so this month, by rehashing this subject and showing another more powerful mechanism that's built into the .Net framework to provide an ASP.Net style scripting host for your client applications.

Hosting the ASP.Net runtime

The .Net framework is very flexible, especially in terms of the plumbing that goes into the various sub-systems that make up the core system services. So it should be no great surprise that the ASP.Net scripting runtime can be hosted in your own applications. This has several benefits over the ASP style parsing approach I showed in my last article.

First, the ASP.Net runtime comes with the .Net framework and is a system component, so you don't have to install anything separately. The runtime is also much more powerful than the simple script parser I introduced, as it supports just about everything that ASP.Net supports for Web pages, including all installed and registered languages and ASP.Net style Web Form syntax. The runtime also includes the ability to determine if a page was previously compiled so it doesn't have to be recompiled each time. It handles updates to pages automatically, and as an especially nice bonus you can debug your script pages using the VS.Net debugger.

As always with .Net internals, though, this power comes with a price and that price is overhead and complexity. There are a number of non-obvious ways to accomplish seemingly simple tasks – such as passing parameters or leaving the runtime idle for a while – and I'll introduce a set of classes that greatly simplify this process down to a few lines of code, while also showing you the key things that you need to know and implement.

The good news is that calling the ASP.Net runtime from any .Net application is pretty straightforward. There are three major steps involved in this process:

1. Setting up the runtime environment
This includes telling the runtime which directory to use as its base directory for a Web application (like a virtual directory on a Web server, except here it will be all local files) and setting up a new AppDomain that the runtime can execute in. The ASP.Net runtime runs in another AppDomain, and all information between your app and it runs over the remoting features of .Net.

2. Creating the script page
This page is a single page that contains ASP.Net code. This means you can use pages that contain <% %>, <%= %> and <script runat="server"> syntax, as long as it runs in a single page. You also need to make sure that you use the appropriate <%@ Assembly %> and <%@ Import Namespace %> directives. The current application directory and all assemblies accessible to the current application will also be accessible by the script pages.

3. Calling the actual script page to execute
This step involves telling the runtime which page to execute within the directory tree set up as a 'virtual' in the file system. ASP.Net requires this to find its base directory and associated files. To make the actual call you use the SimpleWorkerRequest class to create a Request object that is passed to the HttpRuntime's ProcessRequest method.

Using the wwAspRuntimeHost Class

To simplify the process of hosting the ASP.Net runtime I created a class that wraps steps 1 and 3. With the class the code to run a single ASP.Net request from a disk file is:

Listing 1 (Simplescript.cs): Using the wwAspRuntimeHost class to execute an ASP.Net page

loHost = new wwAspRuntimeHost();

/// *** Use WebDir beneath the application as the 'virtual' called 'LocalScript'
loHost.cPhysicalDirectory = Directory.GetCurrentDirectory() + "\\WebDir\\";
loHost.cVirtualPath = "/LocalScript";   // Optional

/// *** Store the output to this file on disk
loHost.cOutputFile = loHost.cPhysicalDirectory + "__preview.htm";

/// *** Start the ASP.Net runtime in a separate AppDomain
loHost.Start();

/// *** Run the actual page – can be called multiple times!
loHost.ProcessRequest("TextRepeater.aspx", "Text=Script+This&Repeat=3");

/// *** View the output file in the Web Browser Control
this.oBrowser.Navigate(loHost.cOutputHTML);

/// *** Shut down the runtime – unload the AppDomain
loHost.Stop();

You start by instantiating the runtime object and setting the physical disk path where the ASP.Net application is hosted – this is the directory where scripts and other script content like images go. The Start() method then starts up the ASP.Net runtime in a new AppDomain. This process is delegated to a proxy class – wwAspRuntimeProxy – that actually performs all the work. The wwAspRuntimeHost class itself is simply a wrapper that holds a reference to the proxy and manages this remote proxy instance, providing a cleaner class interface and error handling for any problems with the remoting required to go over AppDomain boundaries.

Once the Start() method has been called you can make one or more calls to ProcessRequest() with the name of a page to execute in the local directory you set up in cPhysicalDirectory. Any relative path to an ASP.Net page can be used, with syntax like "textRepeater.aspx" or "subdir\test.aspx". You can also pass an optional querystring made up of key value pairs that can be retrieved by the ASP.Net page. This serves as a simple though limited parameter mechanism. I'll show how to pass complex parameters later.

In order to generate output from a page request you need to specify an output file with a full path in the cOutputFile property. This file receives any output generated by the ASP.Net runtime – in most cases the final HTML result from your script. Keep in mind that although you'll typically generate HTML for display in an HTML renderer like a Web Browser or a Web Browser control (see Figure 1), you can generate any kind of output. I often use templates for code and documentation generation, which is not HTML.

Listing 2 shows an example of a simple script page that is a TextRepeater – you type in a string on the form and the script page repeats the text as many times as you specify in the querystring. Figure 1 shows the output from the form displayed in a Web Browser control.

Listing 2 (ASP.Net C#): A simple ASP.Net page executed locally

<%
string lcRepeat = Request.QueryString["Repeat"];
if (lcRepeat == null)
    lcRepeat = "1";
this.Repeat = Int32.Parse(lcRepeat);
%>
<html>
<head>… omitted for brevity</head>
<body style="font: Normal Normal 10pt verdana">
<h2>Desktop Scripting Sample</h2>
<hr>
Method RepeatText value:
<%= this.RepeatText(Request.QueryString["Text"], this.Repeat) %>
<hr>
<small>Time is: <b><%= System.DateTime.Now.ToString() %></b></small>
</body>
</html>

<script runat="server" language="C#">
//*** Property added to Page class
public int Repeat = 3;

//*** Page Class method
string RepeatText(string lcValue, int lnCount)
{
    string lcOutput = "";
    for (int x = 0; x < lnCount; x++)
    {
        lcOutput = lcOutput + lcValue;
    }
    return lcOutput;
}
</script>

This scripted page is pretty simple, but it demonstrates the basic ASP style scripting behavior that you can perform on a page – from embedding expressions (<%= %>) to executing code blocks (<% %>) to defining properties (Repeat) and methods (RepeatText) in the script block, which can then be accessed in the script or expressions.

You can pass simple information to the page using a query string with code like this:

this.oHost.ProcessRequest("Test.aspx","Text=Script This&Repeat=6");


Figure 1 – A simple ASP.Net client script executed locally and then displayed with the WebBrowser control. The repeat count (3 here) is passed as a querystring variable to the ASP.Net script page.

Querystrings are encoded key value pairs in the same format as Web page query strings, and this example sends two keys – Text and Repeat. The Repeat value is used in the script page using:

<%
string lcRepeat = Request.QueryString["Repeat"];
if (lcRepeat == null)
    lcRepeat = "1";
this.Repeat = Int32.Parse(lcRepeat);
%>

to retrieve the value and convert it into a numeric that can be passed to the RepeatText() method in the script. Just like in ASP.Net pages, you can create script pages that essentially contain properties and custom methods right inside of the script page with:

<%= this.RepeatText(Request.QueryString["Text"],this.Repeat) %>

In addition, all of the ASP.Net collections are available, such as ServerVariables. But not all things that you might use in a Web app will be there, such as SERVER_NAME, REMOTE_CLIENT etc., since these don't apply to local applications. Others like SCRIPT_NAME and APPL_PHYSICAL_PATH, on the other hand, do return useful values that you can easily use in your application.
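For example, a hosted script page can look at the variables that do carry meaning locally (a small sketch of code inside such a page):

// Inside a hosted script page: some server variables still return useful values
// even though there is no real Web server behind the request.
string scriptName   = Request.ServerVariables["SCRIPT_NAME"];
string physicalPath = Request.ServerVariables["APPL_PHYSICAL_PATH"];
Response.Write(scriptName + " runs from " + physicalPath);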


If you want to embed images into your HTML you can do so via relative pathing in the Web directory relative to the output file. Just make sure you are generating the HTML page to be rendered into the base path so that the image pathing works when rendering the form. This is why I used:

loHost.cOutputFile = Directory.GetCurrentDirectory() + @"\WebDir\__Preview.htm";

to generate the HTML into the same directory used as the ASP.Net virtual path.

The sample application shown in Figure 1 lets you display any scripts in the WebDir directory of the sample. When you click on Execute you'll find that it takes 2-4 seconds for the ASP.Net runtime to start up for the first time, but subsequent calls to the same page are faster. The overhead you see here is both from the runtime loading for the first time and from the script page being compiled on the first hit. If you click Execute on the same page a few times performance is fast, but if you click on Unload (which calls the Stop() method) the runtime is unloaded and reloaded on the next hit, which again incurs the 2-4 second startup time. Each time you unload the runtime each page executed must be recompiled.

You can also edit scripts by clicking on the Edit tab, which contains a textbox with the script code. If you want to change a script simply make the change and click Save and then press the Execute button again to see the changes displayed in the Web Browser control. Note that when you make changes the page needs to be recompiled so the first hit is a little slow again. You can also edit the script page in an external editor like VS.Net of course.

Once a program or script has been loaded into an AppDomain it cannot be unloaded again unless you unload the AppDomain, and each change made to a script page adds a new class to the existing type cache. Internally ASP.Net does something very similar to the process wwScripting introduced in my last article – taking a script page and turning it into a class that is compiled and run on the fly. In order to avoid having too much memory taken up by many scripts and the compilers you can unload the runtime using the Stop() method of the wwAspRuntimeHost class.
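One simple way to apply this is to recycle the runtime after a number of requests; the counter and the threshold below are arbitrary and only illustrate the idea:

// Hypothetical recycling logic: unload the AppDomain (and every compiled script
// class with it) every so often, at the cost of the startup delay on the next hit.
this.nRequestCount++;
if (this.nRequestCount > 50)
{
    this.oHost.Stop();     // unloads the hosting AppDomain
    this.oHost.Start();    // reloads the runtime; pages recompile on first hit
    this.nRequestCount = 0;
}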

Obviously you can do more complex things in these dynamic pages such as load business objects and retrieve data to display on an ASP.Net style form. We'll take a look at a more advanced and useful example later.

How it works

The wwAspRuntimeHost class is based on a couple of lower level classes – wwAspRuntimeProxy, which acts as the remoting proxy reference for the ASP.Net runtime, and the wwWorkerRequest class, which is a subclass of the SimpleWorkerRequest class that is used to handle parameter passing to script pages. Your application talks only to the wwAspRuntimeHost class. This class acts as a wrapper around the proxy class to provide error handling for remoting problems, since the proxy is actually a remote object reference.

Most of the work is performed by the wwAspRuntimeProxy class, which performs the nuts and bolts operation of setting up and calling the ASP.Net runtime. The first step is to create a new Application Domain for the runtime to be hosted in. Microsoft actually provides a static method – ApplicationHost.CreateApplicationHost – that provides this functionality. Unfortunately this behavior is not very flexible, and exactly what's required is sparsely documented. For this reason, and after a fair amount of searching, a more flexible solution for me was to create my own AppDomain and load the runtime into it. This allows considerably more configuration of where the runtime finds support files (in the code below, in the main application's path) and how the host is configured as a custom class. Further, it doesn't require copying the application's main assembly that hosts these classes into the virtual directory's BIN directory. Listing 3 shows the code to create an ASP.Net hosting capable AppDomain. The class methods described below are all part of the wwAspRuntimeProxy class, which you can find in the code reference for this article.

Listing 3 (wwAspRuntimeHost.cs): Creating the AppDomain ASP.Net can load in.

public static wwAspRuntimeProxy CreateApplicationHost(Type hostType,
                                                      string virtualDir,
                                                      string physicalDir)
{
    string aspDir = HttpRuntime.AspInstallDirectory;
    string domainId = "ASPHOST_" +
        DateTime.Now.ToString().GetHashCode().ToString("x");
    string appName = "ASPHOST";

    AppDomainSetup setup = new AppDomainSetup();
    setup.ApplicationName = appName;
    setup.ConfigurationFile = "web.config";

    AppDomain loDomain = AppDomain.CreateDomain(domainId, null, setup);

    loDomain.SetData(".appDomain", "*");
    loDomain.SetData(".appPath", physicalDir);
    loDomain.SetData(".appVPath", virtualDir);
    loDomain.SetData(".domainId", domainId);
    loDomain.SetData(".hostingVirtualPath", virtualDir);
    loDomain.SetData(".hostingInstallDir", aspDir);

    ObjectHandle oh = loDomain.CreateInstance(
        hostType.Module.Assembly.FullName, hostType.FullName);

    wwAspRuntimeProxy loHost = (wwAspRuntimeProxy) oh.Unwrap();

    // *** Save virtual and physical to tell where app runs later
    loHost.cVirtualPath = virtualDir;
    loHost.cPhysicalDirectory = physicalDir;

    // *** Save Domain so we can unload later
    loHost.oAppDomain = loDomain;

    return loHost;
}

public static wwAspRuntimeProxy Start(string PhysicalPath, string VirtualPath)
{
    wwAspRuntimeProxy loHost = wwAspRuntimeProxy.CreateApplicationHost(
        typeof(wwAspRuntimeProxy), VirtualPath, PhysicalPath);

    return loHost;
}

public static void Stop(wwAspRuntimeProxy loHost)
{
    if (loHost != null)
    {
        AppDomain.Unload(loHost.oAppDomain);
        loHost = null;
    }
}

These three methods represent the main management methods of the wwAspRuntimeProxy class. The code in CreateApplicationHost essentially creates a new application domain (think of it as a separate process within a process) and assigns a number of properties to it that the ASP.Net runtime requires. The above code is the minimal configuration required to set up an AppDomain for executing ASP.Net. Once the AppDomain exists, an instance of the runtime host class is loaded into it with loDomain.CreateInstance(). From there on out the ASP.Net runtime host exists and can be accessed over AppDomain boundaries via .Net Remoting. Luckily several built-in helper classes help with this process.
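For comparison, the built-in helper mentioned earlier boils down to a single call. This sketch assumes a host class of your own (MyRuntimeHost here) that derives from MarshalByRefObject:

using System.Web.Hosting;

// The stock helper creates the AppDomain and loads your host type into it, but it
// gives you little control over where support assemblies are located.
MyRuntimeHost host = (MyRuntimeHost) ApplicationHost.CreateApplicationHost(
    typeof(MyRuntimeHost),    // host type instantiated inside the new AppDomain
    "/LocalScript",           // virtual directory
    physicalDir);             // physical root directory for the scripts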

Note that these three methods are static – no instance is required to call them and they don't have access to any of the properties of the class. However, the loHost instance created in CreateApplicationHost is a full remote proxy instance, and several properties are set on it to allow calling applications to keep track of where the environment was loaded, via the virtual and physical path. The virtual path in a local application is nothing more than a label you'll see on error messages that ASP.Net generates on script errors. The value should be in virtual directory format such as "/" or "/LocalScript". The physical path should point to a specific directory on your hard disk that will be the ASP.Net root directory for scripts. You can access scripts there by name or relative path. I like to use a physical path below the application's startup path and call it WebDir, or HTML or Templates. So while working on this project it's something like: D:\projects\RichUserInterface\bin\debug\WebDir\. The trailing backslash is important, by the way.

A class that can host ASP.Net is a pretty simple affair – it must derive from MarshalByRefObject in order to be accessible across domains, and it should implement one or more methods that can call an ASP.Net request using the HttpWorkerRequest or SimpleWorkerRequest classes or subclasses thereof. As I mentioned, to keep things simple both the static loader methods and the ProcessRequest method are contained in the same class. When the Start() method is called it returns a remote instance of the wwAspRuntimeProxy class, on which you can call the ProcessRequest() method. This method is the worker method that performs the pass through calls to the ASP.Net runtime; it takes the name of an ASP.Net page in server relative pathing, in the format of "test.aspx" or "subdir\test.aspx". Listing 4 shows the implementation of this method.

Listing 4 (wwAspRuntimeHost.cs): Calling an ASP.Net page once the runtime is loaded.

public virtual bool ProcessRequest(string Page, string QueryString)
{
    TextWriter loOutput;

    try
    {
        loOutput = File.CreateText(this.cOutputHTML);
    }
    catch (Exception ex)
    {
        this.bError = true;
        this.cErrorMsg = ex.Message;
        return false;
    }

    SimpleWorkerRequest Request =
        new SimpleWorkerRequest(Page, QueryString, loOutput);

    try
    {
        HttpRuntime.ProcessRequest(Request);
    }
    catch (Exception ex)
    {
        this.cErrorMsg = ex.Message;
        this.bError = true;
        return false;
    }

    loOutput.Close();
    return true;
}

This code does two things: it creates an output file stream, and it uses the SimpleWorkerRequest class to create a request that is passed to the HttpRuntime. The request is essentially similar to the way that IIS receives request information in a Web Server request, except here we're only passing the absolute minimal information to the ASP.Net processing engine: the name of the page to execute and a querystring, along with a TextWriter instance to receive the output generated.

The new instance of the Request is then passed to the HttpRuntime for processing, which makes the actual ASP.Net parsing call. It's important to understand that this code is being executed remotely in the created AppDomain that also hosts the ASP.Net runtime, so the call to this entire method (loHost.ProcessRequest()) actually runs over the AppDomain remoting architecture. This has some impact on error management.

Any errors that occur within the script code itself will be returned as ASP.Net error pages, just like you would see during Web development. Figure 2 shows an error in the for loop caused by not declaring the enumerating variable. Note that this is the only way you can get error information – there's no property that is set or exception that is triggered on this failure, other than inside of the script code itself. This is both useful and limiting – the debug information is very detailed and easily viewable as HTML, but if your app needs this error info internally there's no way to get it except by parsing it out of the HTML content.

Figure 2 – Errors inside of client scripts bring up the detailed error messages in HTML format. No direct error info is returned to the calling application however.

Passing parameters to the ASP.Net page

So far I've shown you how to do basic scripting, which all by itself is very powerful. However, when building template based applications it's not good enough to be able to process code in templates – you also have to be able to receive data from the calling application. In the examples above we've been limited to small text parameters that can be passed via the querystring. While you could probably use the querystring to pass around serialized data from objects and datasets, this is really messy and requires too much code on both ends of the script calling mechanism.

The idea of a desktop application that utilizes scripts is that the application performs the main processing while the scripts act as the HTML display mechanism. In order to do this we need to be able to pass complex data to our script pages.

My wwAspRuntimeProxy class provides a ParameterData property that you can assign any value to, and it will pass this value to the ASP.Net application as a Context item named "Content", which can then be retrieved on a form. From the application code you'd do:

Listing 5 (Simplescript.cs): Using the wwAspRuntimeProxy class to execute an ASP.Net page

loHost = wwAspRuntimeProxy.Start(Directory.GetCurrentDirectory() + @"\WebDir\",
                                 "/LocalScript");

loHost.cOutputFile = Directory.GetCurrentDirectory() + @"\WebDir\__Preview.htm";

…
cCustomer loCust = new cCustomer();
loCust.cCompany = "West Wind Technologies";
loHost.ParameterData = loCust;

this.oHost.ProcessRequest("PassObject.aspx", null);

To get this to work a little more work and a few changes are required. The SimpleWorkerRequest class doesn't provide for passing properties or content to the ASP.Net page directly. However, we can subclass it and override one of the methods that hook into the HttpRuntime processing pipeline. Specifically we can implement the SetEndOfSendNotification() method, which receives a reference to the HTTP Context object that is accessible to your ASP.Net script pages, and assign an object reference to it. Listing 6 shows an implementation of this class that takes the ParameterData property and stores it in the Context object.

Listing 6 (wwAspRuntimeHost.cs): Implementing a SimpleWorkerRequest subclass to pass objects to script pages.

public class wwWorkerRequest : SimpleWorkerRequest
{
    public object ParameterData = null;   // object to pass

    // *** Must implement the constructor
    public wwWorkerRequest(string Page, string QueryString, TextWriter Output) :
        base(Page, QueryString, Output) {}

    public override void SetEndOfSendNotification(
        EndOfSendNotification callback, object extraData)
    {
        base.SetEndOfSendNotification(callback, extraData);

        if (this.ParameterData != null)
        {
            HttpContext context = extraData as HttpContext;
            if (context != null)
            {
                // *** Add any extra data here to the Context
                context.Items.Add("Content", this.ParameterData);
            }
        }
    }
}

First we need to implement the constructor by simply forwarding the parameters to the base class. The SetEndOfSendNotification method gets fired just before processing is handed over to the ASP.Net page, after any request data has been provided. The extraData parameter at this point contains an instance of the HttpContext object, which you can access in your ASP.Net pages with:

object loData = this.Context.Items["Content"];

And voila, you can now access object data. This subclass passes a single object, which for most purposes should be enough. If you need to pass more than one you can simply create a composite object and hang multiple object references off that one to pass multiple items. Of course you can also subclass this class on your own and add as many properties as needed to pass into the Context object. Note that objects passed in this fashion, including any subobjects, must be marked as Serializable:

[Serializable()]
public class cCustomer
{
    public string cCompany = "West Wind Technologies";
    public string cName = "Rick Strahl";
    public string cAddress = "32 Kaiea Place";
    public string cCity = "Paia";
    public string cState = "HI";
    public string cZip = "96779";
    public string cEmail = "[email protected]";
    public cPhones oPhones = null;

    public cCustomer()
    {
        this.oPhones = new cPhones();
    }
}

Alternately a class can derive from MarshalByRefObject to be able to be accessed over the wire as well:


public class cPhones : MarshalByRefObject
{
    public string Phone = "808 579-8342";
    public string Fax = "808 579-8342";
}

Before we can utilize this functionality we need to change a couple of things in the wwAspRuntimeProxy class. First we need to add a property called ParameterData, which will hold the data we want to pass to the ASP.Net application. Next we need to change the code in the ProcessRequest method to use our custom wwWorkerRequest class instead of SimpleWorkerRequest:

wwWorkerRequest Request = new wwWorkerRequest(Page, QueryString, loOutput);
Request.ParameterData = this.ParameterData;

We also need to pass the ParameterData property forward. To execute a script with the object contained within it, check out the PassObject.aspx script page shown in Listing 6.

Listing 6 (PassObject.aspx): Receiving an object in script code from the calling application

<%@assembly name="AspNetHosting"%>
<%@import namespace="AspNetHosting"%>
<% this.oCust = (cCustomer) this.Context.Items["Content"]; %>
<html>
<head>
<style>
H2 { background: Navy; color: White; font-size: 18pt; height: 24pt; }
</style>
</head>
<body style="font: Normal Normal 10pt verdana;background:LightYellow">
<h2>Object Passing Demo</h2>
<hr>
Customer Name: <%= this.oCust.cName %><br>
Company: <%= this.ReturnCustomerInfo() %><br>
Address: <%= this.oCust.cAddress %><br>
City: <%= this.oCust.cCity %>, <%= this.oCust.cState %> <%= this.oCust.cZip %>
<p>
Phone: <%= this.oCust.oPhones.Phone %><br>
Fax: <%= this.oCust.oPhones.Fax %>
<hr>
<small>
Physical Path: <b><%= Request.ServerVariables["APPL_PHYSICAL_PATH"] %></b><br>
Script Name: <b><%= Request.ServerVariables["SCRIPT_NAME"] %></b><br>
Time is: <b><%= System.DateTime.Now.ToString() %></b>
</small>
</body>
</html>

<script runat="server" language="C#">
//*** Property added to Page class
public cCustomer oCust = null;

//*** Page Class method
string ReturnCustomerInfo()
{
    return this.oCust.cCompany;
}
</script>

There are a couple of important points here. First, notice that you need to import the assembly and namespace of any classes that you want to use. Since I declared the classes in my main application (RichUserInterface.exe, with a default namespace of RichUserInterface) I have to include the EXE file as an assembly reference. If you require any other non-System namespaces or assemblies you will have to reference those as well.

You should omit the .exe or .DLL extensions of any included assemblies. If you try to run with the extension you will get an error as the runtime tries to append the extensions as it searches for the file.

Since we imported the namespace and assembly, we can then reference our value by its proper type and assign it to a property that I added to the script page (oCust). To assign the value we have to cast it to the proper cCustomer type.

<% this.oCust = (cCustomer) this.Context.Items["Content"]; %>

Once this has been done, you can access this object as needed by using its property values. To embed it into the page you can then just use syntax like this:

Customer Name: <%= this.oCust.cName %>

You can also call methods this way. For example, if you add this method to the customer object:

public string CityZipStateString() { return this.cCity + ", " + this.cState + " " + this.cZip;}

You can then call it from the script page like this:

City: <%= this.oCust.CityZipStateString() %>

This means that you can easily execute business logic right within a script page. However, I would recommend you try to minimize the amount of code you run within a script page, instead relying on it to provide the dynamic and customizable interface for the application. So rather than passing an ID via the querystring and then using the object to load the data to display, use the application to perform the load operation and simply pass the object to the page to be displayed. The main catch is that the object passed must be in some way serializable to pass over the AppDomain boundaries.

Providing POST data to the script page

Another useful interaction mechanism between the client and the script page is to provide POST data to the script page. This POST data can then be accessed just like on a true Web Form with the Request.Form collection. In fact, with a little bit of trickery using the IE WebBrowser control you could easily create an offline viewer for a Web site that would not require a Web server at all. This could be handy, for example, for shipping a dynamic Web site on a CD.

The process to do this, like passing complex objects in the Context object, requires that you override methods of the SimpleWorkerRequest class. With POST a number of things must be set – the HTTP verb, the content type and the content length – which requires overriding three separate methods. To support POST operations the SimpleWorkerRequest subclass must override GetHttpVerbName(), which should return POST, GetKnownRequestHeader(), which should return the content type and length, and GetPreloadedEntityBody(), which should return the actual POST buffer. In order for the client code to set the post buffer I added an AddPostBuffer() method to the wwAspRuntimeHost class, which uses internal PostData and PostContentType properties to hold the state of the post operation. The implementation on the SimpleWorkerRequest subclass requires the code in Listing 6.5, which overrides three of the Http pipeline's methods.

Listing 6.5 – Adding POST support to the SimpleWorkerRequest subclass

byte[] PostData = null;
string PostContentType = "application/x-www-form-urlencoded";
…
public override String GetHttpVerbName()
{
    if (this.PostData == null)
        return base.GetHttpVerbName();

    return "POST";
}

public override string GetKnownRequestHeader(int index)
{
    if (index == HttpWorkerRequest.HeaderContentLength)
    {
        if (this.PostData != null)
            return this.PostData.Length.ToString();
    }
    else if (index == HttpWorkerRequest.HeaderContentType)
    {
        if (this.PostData != null)
            return this.PostContentType;
    }

    return base.GetKnownRequestHeader(index);
}

public override byte[] GetPreloadedEntityBody()
{
    if (this.PostData != null)
        return this.PostData;

    return base.GetPreloadedEntityBody();
}

The value of PostData and PostContentType is passed down from the wwAspRuntimeProxy and ultimately from the wwAspRuntimeHost class, which both include these properties as well. Post operations can then be performed using raw post buffers like this:

this.oHost.AddPostBuffer("Company=West+Wind&Name=Rick+Strahl");

The AddPostBuffer method supports both string and binary input (byte[]) and has an additional optional content type parameter, which defaults to URL-encoded form content. Note that you must provide a raw POST buffer, which in the case of URL-encoded content looks like the snippet above. Other content types like multi-part and XML can be posted in raw form. Once you've added this POST buffer to the application you can retrieve that info using Request.Form inside of the ASP.Net page:

<%= Request.Form["Company"] %>

This is useful for a number of reasons. First, you can use this mechanism to pass complex parameters and XML to the application and use a standard mechanism to retrieve the input. If you have Web Forms you can use the ID values of controls that you want to set with specific values; for example, the txtCompany form variable would update the Web Control with the ID txtCompany. But secondly, and more importantly, this mechanism allows you to capture the POST buffer from a browser session in IE. So you can display a form using this mechanism in IE, then when the user clicks the submit button, capture the BeforeNavigate2() event, grab the POST buffer and post it right back to the ASP.Net runtime. BeforeNavigate2 provides the URL and the POST buffer, so you have all the info you need to provide a POST back to the ASP.Net page. This makes it possible to create applications that use all of ASP.Net's features, but entirely without any Web Server at all. How's that for cool?
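As a rough sketch of that round trip – assuming the COM-imported AxSHDocVw.AxWebBrowser control, the host members described above and a hypothetical page name – the browser event might be handled like this:

// Capture the browser's POST buffer and replay it against the hosted ASP.Net runtime.
private void oBrowser_BeforeNavigate2(object sender,
    AxSHDocVw.DWebBrowserEvents2_BeforeNavigate2Event e)
{
    byte[] postData = e.postData as byte[];   // may arrive as a byte array when a form posts
    if (postData == null || postData.Length == 0)
        return;                               // plain GET: let the browser navigate normally

    e.cancel = true;                          // we serve this request ourselves

    this.oHost.AddPostBuffer(postData);
    this.oHost.ProcessRequest("OrderForm.aspx", null);    // hypothetical page
    this.oBrowser.Navigate("file://" + this.oHost.cOutputFile);
}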

Configuration


Setting up ASP.Net scripting is pretty straightforward. But you can configure the application handling even more by using a web.config file.

Listing 7 (config.web): Configuring the ASP.Net environment with a configuration file

<configuration>
  <system.web>
    <compilation debug="true"/>
    <httpHandlers>
      <add verb="*" path="*.wcs" type="System.Web.UI.PageHandlerFactory"/>
      <add verb="*" path="*.htm" type="System.Web.UI.PageHandlerFactory"/>
    </httpHandlers>
  </system.web>
</configuration>

There are two extremely useful settings that you can make. First, you should set debug to true to allow you to debug your scripts. If you have this setting in your application you can debug your scripts right along with your application. Simply open the script in the VS environment, set a breakpoint in the script, then run the application, hit the script, and voila – your script code can be debugged with all of the VS debugging features.

If you don't have an existing VS project you can still use the debugger against the Executable with:

<VS Path>\devenv /debugexe RichUserInterface.exe

Let VS create a new solution for you, open the page to debug, set a breakpoint and off you go. This is a very cool feature, as you can offer it to your customers as well so they can more easily debug their scripts. Along the same lines you can use the VS editor to edit your scripts, although you should try to stay away from all of the Web Form related designer features, as those are meant for server side development. You can implement this, but frankly you'll be much better off dealing with these sorts of issues in your regular application code.

When you build template based applications you might prefer extensions other than ASPX for your scripts, and you can do this by adding httpHandlers entries into the Config.Web file as shown above. Map each extension to the same System.Web.UI.PageHandlerFactory that ASPX files are sent to (set in machine.config) and you can then process files with those extensions through the scripting runtime. Unlike ASP.Net you don't need scriptmaps to make this work, because we're in control of the HttpRuntime here locally.

Runtime timeouts

So far I've shown the workings of the wwAspRuntimeProxy class and how it implements the code. You have to remember that the proxy is a remote object reference with all of its related issues. A remote reference is a proxy, and if something happens to the remote object – it crashes unrecoverably or times out – the reference goes away. It's difficult to capture this sort of error because any access to the object could cause an error at this point.

There are two issues here: lifetime and error handling. Remote objects have a limited lifetime of 5 minutes by default. After 5 minutes remote object references are released regardless of whether the object has been called in the meantime. When I first ran into this I couldn't quite figure out what was happening – after 5 minutes or so, all of a sudden the pointers to the proxy were gone. It's difficult to detect this failure, because the reference the client holds is not null, so you can't simply check for a non-null value. The only way to detect this is with an exception handler, but wrapping every access to the proxy into an exception handler isn't a good option from a code perspective and it doesn't allow for automatic recovery. My initial workaround was to create a wrapper class that simply makes passthrough calls to the proxy object. This is the main wwAspRuntimeHost class, which wraps the calls to the proxy into exception handling blocks. Specifically, each call to ProcessRequest() first checks to see if a property on the proxy is accessible and, if it is not, it tries to reload the runtime automatically by calling the Start() method. Listing 7.5 shows the implementation of the wrapped ProcessRequest method. The first try/catch block performs the auto-restart of the runtime.

Listing 7.5 (AspRuntimeHost.cs): The proxy wrapper code can catch proxy errors

public virtual bool ProcessRequest(string Page, string QueryString)
{
    this.cErrorMsg = "";
    this.bError = false;

    try
    {
        string lcPath = this.oHost.cOutputFile;
    }
    catch (Exception)
    {
        if (!this.Start())
            return false;
    }

    bool llResult = false;
    try
    {
        this.oHost.ParameterData = this.ParameterData;
        llResult = this.oHost.ProcessRequest(Page, QueryString);
    }
    catch (Exception ex)
    {
        this.cErrorMsg = ex.Message;
        this.bError = true;
        return false;
    }

    return llResult;
}


The wrapper also simplifies the interface of the class by not using static members and by setting properties with the assigned values internally, which makes all of the configured information more readily available to the calling application (see Listing 1). It also hides some of the static worker methods, so the developer facing interface is much cleaner and easier to use, resulting in much less code. Although I eventually found a solution to my timeout problem, creating this wrapper was definitely worthwhile.

The timeout problem turns out to be related to the remote object 'lease'. The initial lease for a remote object is 5 minutes, after which the object is released regardless of access. There are a number of ways to override this, but generally the easiest way is to override the InitializeLifetimeService() method of the proxy object. To do this I added the code shown in Listing 7.7, which sets the value to a more reasonable number of minutes (15 by default) and allows the object to restart counting its lease when a hit occurs.

Listing 7.7 (AspRuntimeHost.cs): Setting the lifetime of the proxy object

using System.Runtime.Remoting.Lifetime;
…
// implement on wwAspRuntimeProxy class
public override Object InitializeLifetimeService()
{
    ILease lease = (ILease) base.InitializeLifetimeService();

    if (lease.CurrentState == LeaseState.Initial)
    {
        lease.InitialLeaseTime =
            TimeSpan.FromMinutes(wwAspRuntimeProxy.nIdleTimeoutMinutes);
        lease.RenewOnCallTime =
            TimeSpan.FromMinutes(wwAspRuntimeProxy.nIdleTimeoutMinutes);
        lease.SponsorshipTimeout = TimeSpan.FromMinutes(5);
    }

    return lease;
}

nIdleTimeoutMinutes is a private static member of the wwAspRuntimeProxy class and can't be set at runtime – you have to set it in the class source – but it defaults to a reasonable value of 15 minutes that you can manually override on the class if necessary. The RenewOnCallTime property automatically causes the lease to be renewed for the amount specified every time a hit occurs, which should be plenty of time. And if the runtime still times out for some reason it will automatically reload because of the wrapper in wwAspRuntimeHost.

Running under ASP.Net

Sometimes you may find it useful to run a dynamic page under ASP.Net as well. For example, if you need to create an email confirmation for an order or you need to send a form letter to a client, you can do so by using an ASPX page to provide the 'textmerge' mechanism to perform this task. Even though you are already running under ASP.Net, this process still requires creating a new Application Domain and using the same mechanisms described here. There are a couple of extra considerations you need to think about. You will have to explicitly provide the cApplicationBase parameter of the wwAspRuntimeHost class and point it at the Web application's BIN directory, or wherever the DLL/EXE is housed that contains the wwAspRuntimeHost class:

loHost.cPhysicalDirectory = Server.MapPath("/wwWebStore") + "\\";
loHost.cApplicationBase = loHost.cPhysicalDirectory + "bin\\";

Your other option in this regard is to create a separate DLL for the wwAspRuntimeHost classes, sign it and move it to the Global Assembly Cache. By doing this ASP.Net can always find the DLL without extra hints and continue to load it. Additionally, the account that is running the ASP.Net application will need rights to create a new application domain and to create temporary files in the ASP.Net application cache where the compiled code is housed.

An example: Assembly Documentation

As an example of how you can utilize the functionality of the wwAspRuntimeHost class and the ASP.Net runtime, I created a small sample application that shows the content of assemblies by generating HTML output from the properties and methods, as well as importing all the documentation from the XML comments if available. XML comments are available for C# applications that have an XML documentation export file set at compile time. The sample will pick up an XML file matching the imported assembly and parse out the documentation to the matching members, then generate HTML. The application acts as a viewer for the class hierarchy as well as the individual members of each class, and you can export the entire thing into HTML pages on disk. Figure 4 shows an example of the running application, which uses the Web Browser control to display each topic.


Figure 4 – A sample application that demonstrates how to use the ASP.Net runtime to build a local application that uses HTML content for display in a desktop application.

The application works by selecting an assembly (a DLL or EXE) for import, which fills the Treeview on the left with data from the classes. A while back I built a class called wwReflection that wraps the various type retrieval interfaces from Reflection to build an easy and COM exportable interface for exporting class information from assemblies. This class also performs post parsing of the retrieved values, such as retrieving the XML documentation and formatting parameters and other text into formats suitable for display as documentation. The detailed method info in Figure 4 demonstrates some of the parsed information that is available.

The wwAspRuntimeHost class is used in this sample to display the help topic for each method, property and class by running a specific HTML template using ASP.Net code. Classes, methods and properties each have a separate template – ClassHeader.aspx, ClassMethod.aspx and ClassProperty.aspx respectively – which are passed an object that contains the relevant information for display.

All the member parsing for classes, methods and properties is handled by wwReflection, which pulls all members into local properties of objects. These local properties have fixed up information contained within them that includes minimal formatting for display of member information such as parameters. The TypeParser has an array of aObjects; each object has an array of aMethods and aProperties, and so on.


The class behavior is rather simple and demonstrated by the LoadTree() method of the ClassDocs form as shown in Listing 8.

Listing 8 (ClassDocs.cs): Using wwReflection to populate the treeview with class info

private void LoadTree(string lcAssembly)
{
    FileInfo loFileInfo = new FileInfo(lcAssembly);

    TypeParser loParser = this.oParser;
    loParser.cFilename = lcAssembly;
    this.txtName.Text = "Assembly Documentation " + loFileInfo.Name;

    //*** If exists add XML documentation
    loParser.cXmlFilename = wwUtils.ForceExtension(lcAssembly, "xml");

    this.oParser.GetAllObjects();   // parse all objects

    this.oList.Nodes.Clear();
    for (int x = 0; x < loParser.nObjectCount; x++)
    {
        DotnetObject loObject = this.oParser.aObjects[x];

        TreeNode loNode = new TreeNode(loObject.cName);
        loNode.ImageIndex = 0;
        loNode.SelectedImageIndex = 0;
        loNode.Tag = x.ToString() + ":-1";

        this.oList.Nodes.Add(loNode);

        //*** Add Methods
        for (int y = 0; y < loObject.nMethodCount; y++)
        {
            ObjectMethod loMethod = loObject.aMethods[y];

            TreeNode loNode2 = new TreeNode(loMethod.cName);
            loNode2.ImageIndex = 1;
            loNode2.SelectedImageIndex = 1;
            loNode2.Tag = x.ToString() + ":" + y.ToString();

            loNode.Nodes.Add(loNode2);
        }

        //*** Add Properties
        for (int y = 0; y < loObject.nPropertyCount; y++)
        {
            ObjectProperty loProperty = loObject.aProperties[y];

            TreeNode loNode3 = new TreeNode(loProperty.cName);
            loNode3.ImageIndex = 2;
            loNode3.SelectedImageIndex = 2;
            loNode3.Tag = x.ToString() + ":" + y.ToString();
            loNode.Nodes.Add(loNode3);
        }
    }

    this.oStatus.Panels[0].Text = "Loaded Assembly: " + lcAssembly;
    this.cCurrentAssembly = lcAssembly;
}

The key feature here is the oParser.GetAllObjects() method, which parses the entire assembly into internal properties of the TypeParser class. aObjects[] is populated with all classes, and the aMethods, aProperties and aEvents members are all filled with the appropriate subobjects. Each of those arrays contains custom objects that contain all of the required class information. Note that each of these classes (defined in wwReflection.cs) is defined as MarshalByRefObject, which is important in order to be passed to our script pages for rendering. Each node's Tag property is set with a string key value pair that identifies the class index and the member index: 0:3 means class index 0 and member index 3, for example, and this value is parsed apart when a node is selected to retrieve the object and method to pass to the script page.
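A minimal sketch of that parsing step (with hypothetical local names) when a node is selected:

// Split the "classIndex:memberIndex" key stored in the node's Tag.
string[] parts = e.Node.Tag.ToString().Split(':');
int lnClassIndex  = Int32.Parse(parts[0]);
int lnMemberIndex = Int32.Parse(parts[1]);   // -1 means the class node itself was selected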

When the form loads the code initializes our wwAspRuntimeHost class with this code:

this.oHost = new wwAspRuntimeHost();
this.oHost.cPhysicalDirectory = Directory.GetCurrentDirectory() + "\\WebDir\\";
this.oHost.cVirtualPath = "/Documentation";
this.oHost.cOutputFile = this.oHost.cPhysicalDirectory + "__preview.htm";
this.oHost.Start();

which initializes and starts up the ASP.Net runtime. Each click on a treenode then fires code to call a specific template. The code in Listing 9 shows how the rendering of a class method is handled.

Listing 9 (ClassDocs.cs): Rendering a method through a script page

// *** Methods
if (e.Node.ImageIndex == 1)
{
    // *** Pass multiple objects by using a container object
    DotnetObjectParameterData loParameter = new DotnetObjectParameterData();
    loParameter.oObject = this.oParser.aObjects[lnClassIndex];
    loParameter.oMethod =
        this.oParser.aObjects[lnClassIndex].aMethods[lnMemberIndex];
    loParameter.cTitle = this.txtName.Text;

    this.oHost.ParameterData = loParameter;

    this.oHost.ProcessRequest("ClassMethod.aspx", null);
    this.Navigate("file://" + this.oHost.cOutputFile);
}

The code first checks which type of member is selected based on the ImageIndex – image 1 is a method. It then creates a new DotnetObjectParameterData object which acts as a parameter container for a number of other objects that we want to pass to the script page. It's defined like this:

public class DotnetObjectParameterData : MarshalByRefObject
{
    public string cTitle = "Assembly Documentation";
    public DotnetObject oObject = null;
    public ObjectMethod oMethod = null;
    public ObjectProperty oProperty = null;
    public ObjectEvent oEvent = null;
}

Using this object we can assign the properties appropriate for each type of member and pass the information using the wwAspRuntimeHost class's ParameterData 'parameter' object. Inside of the script page we can then pick up the parameter and use it within the page. For the method script, the header/startup code is shown in Listing 10. Note that this script is modified and truncated significantly for brevity and clarity.

Listing 10 (ClassMethod.aspx): Partial code from the method rendering script page.

<%@assembly name="AspNetHosting"%>
<%@import namespace="AspNetHosting"%>
<%@import namespace="Westwind.wwReflection"%>

<script runat="server" language="C#">
    DotnetObject oObject = null;
    ObjectMethod oMethod = null;
    string cTitle = "";
</script>
<%
DotnetObjectParameterData loData =
    (DotnetObjectParameterData) this.Context.Items["Content"];

this.oObject = loData.oObject;
this.oMethod = loData.oMethod;
this.cTitle = loData.cTitle;
%>
<html>
<head>
<title><%= this.oMethod.cName %></title>
…
<h2><%= this.oObject.cName %> :: <%= this.oMethod.cName %></h2>
<p><%= this.oMethod.cHelpText %><p>
…
<table …>
…
<tr>
  <td width="100" valign="top" align="right" class="labels">
    <p align="right">Syntax: </td>
  <td bgColor="#eeeeee" style="font: bold bold 10pt 'Courier New'">
    <b>o.<%= this.oMethod.cName %>(<%= this.oMethod.cParameters %>)</b></td>
</tr>
</table>

The assembly and namespace references are required to allow use of the classes imported. In this case RichUserInterface.exe contains all of the classes referenced, but with most real world solutions you'll likely have several assemblies that you need to reference. Next the code assigns a couple of local object references that have been set up for the script page class – oObject and oMethod. These are assigned from the Context item with:

DotnetObjectParameterData loData = (DotnetObjectParameterData) this.Context.Items["Content"];

This is the parameter that was passed to the oHost.ParameterData member on the form and is now available to our script page. Once we have a reference to this object we can cast it and simply retrieve the properties we're interested in, which are the object, the method and the project title.

To display each of these values we can now simply embed expressions into the ASP.Net page – such as <%= this.oMethod.cHelpText %>.

The net effect of all of this is that we can use our business objects in the desktop application to perform all logic, and use only a couple of lines of code in the script to retrieve the appropriate parameter data and then the data to display: clear separation of the user interface layer and the business logic layer, while providing an attractive, extensible and configurable display for the user.

Because we are dealing with ASP.Net here we can also utilize the full power of scripting in our pages. For example, we could have the Class page utilize the oParser instance to run through all methods and properties and generate a simple table display summary by embedding the script. Using script provides a lot of flexibility for this sort of functionality.

In addition you can make your HTML display interface somewhat interactive by handling hyperlinks in the HTML display to fire actions in your application by using the BeforeNavigate2 event of the WebBrowser control. But that's a subject for a future article (as there are some problems with the COM imported WebBrowser control).

Scripts away

I hope this article has given you a better idea of how you can utilize dynamic content in your desktop applications. The ability to externalize your user interface into templates provides a powerful mechanism for creating rich user interfaces and provides the customization and extensibility that make your applications attractive to power users. Building display interfaces in HTML gives you display flexibility that you simply do not have with regular form controls. It's so much easier to build an engaging visual page including images and color with HTML than is possible with form controls. At the same time you can continue to use Windows Forms controls where they make most sense – for data entry and validation, where HTML is sorely lacking. For example, the real world application that the assembly documentation sample is based on uses templates for displaying help topic content and rendering the final HTML output that gets compiled into HTML Help files, while all the editing of the topic data is accomplished through a standard tab based Windows Form interface. You get the instant real-time preview while still having a traditional and structured data entry mechanism.

There are many more uses for scripting with ASP.Net – it's a tremendously powerful mechanism because you can basically create anything from batch scripts to code generators directly with the engine. So what's your next script?

fixed Statement

Prevents relocation of a variable by the garbage collector. It takes the following form:

fixed ( type* ptr = expr ) statement

where:

type

An unmanaged type or void.

ptr

A pointer name.

expr

An expression that is implicitly convertible to type*.

statement

Executable statement or block.

Remarks

The fixed statement is only permitted in an unsafe context.

The fixed statement sets a pointer to a managed variable and "pins" that variable during the execution of statement. Without fixed, pointers to managed variables would be of little use since garbage collection could relocate the variables unpredictably. (In fact, the C# compiler will not allow you to set a pointer to a managed variable except in a fixed statement.)


// assume class Point { public int x, y; }
Point pt = new Point();     // pt is a managed variable, subject to g.c.
fixed ( int* p = &pt.x )    // must use fixed to get address of pt.x
{
    *p = 1;                 // pin pt in place while we use the pointer
}


lock Statement

The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock. This statement takes the following form:

lock(expression) statement_block

where:

expression - Specifies the object that you want to lock on. expression must be a reference type. Typically, expression will either be this, if you want to protect an instance variable, or typeof(class), if you want to protect a static variable (or if the critical section occurs in a static method in the given class).
statement_block - The statements of the critical section.

Remarks

lock ensures that one thread does not enter a critical section while another thread is in that critical section of code. If another thread attempts to enter the locked code, it will wait (block) until the object is released.

Section 8.12, "The lock statement", of the C# language specification discusses lock in more detail.

Example 1

The following sample shows a simple use of threads in C#.


// statements_lock.cs
using System;
using System.Threading;

class ThreadTest
{
    public void runme()
    {
        Console.WriteLine("runme called");
    }

    public static void Main()
    {
        ThreadTest b = new ThreadTest();
        Thread t = new Thread(new ThreadStart(b.runme));
        t.Start();
    }
}


Output


runme called
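
The sample above only starts a thread; as a rough sketch (the Counter class below is hypothetical, not part of the original sample), a lock statement would protect shared state like this:

class Counter
{
    private int total = 0;
    private readonly object sync = new object();   // dedicated lock object

    public void Increment()
    {
        lock (sync)          // only one thread at a time may run this block
        {
            total++;
        }
    }

    public int Total
    {
        get { lock (sync) { return total; } }
    }
}

Locking on a private object rather than on this or on a public type avoids accidentally sharing the lock with unrelated code.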

1. What standard types does C# use?

C# supports a very similar range of basic types to C++, including int, long, float, double, char, string, arrays, structs and classes. The names may be familiar, but many of the details are different. For example, a long is 64 bits in C#, whereas in C++ the size of a long depends on the platform (typically 32 bits on a 32-bit platform, 64 bits on a 64-bit platform). Also, classes and structs are almost the same in C++, which is not true for C#. Finally, chars and strings in .NET are 16-bit (Unicode/UTF-16), not 8-bit as in C++.

2.What is the syntax to inherit from a class in C#?

Place a colon and then the name of the base class.

Example: class DerivedClassName: BaseClassName

3.How can I make sure my C# classes will interoperate with other .Net languages?

Make sure your C# code conforms to the Common Language Specification (CLS). To help with this, add the [assembly: CLSCompliant(true)] global attribute to your C# source files. The compiler will emit a warning if you expose a feature that is not CLS-compliant from a public member.
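
A minimal sketch of marking an assembly CLS-compliant (the Calculator class is hypothetical):

using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    // CLS-compliant: int is a CLS type, so other .NET languages can call this
    public int Add(int a, int b)
    {
        return a + b;
    }

    // Making this method public would trigger a CLS-compliance warning,
    // because unsigned types such as uint are not part of the CLS.
    internal uint AddUnsigned(uint a, uint b)
    {
        return a + b;
    }
}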

4. Does C# support variable arguments on methods?

The params keyword can be applied to a method parameter that is an array. When the method is invoked, the elements of the array can be supplied as a comma-separated list. So, if the last method parameter is a params object array,

void paramsExample(object arg1, object arg2, params object[] argsRest)

{ foreach (object arg in argsRest)

{

/* …. */

}

}

then the method can be invoked with any number of arguments of any type:

paramsExample(1, 0.0f, "a string", 0.0m, new UserDefinedType());


5.What’s the difference between const and readonly?

A readonly field is a delayed-initialized constant: its value is set at run time (in a field initializer or constructor) and cannot change afterwards, whereas a const field's value must be known at compile time. There is one more difference: a const field is implicitly static and has the same value across all objects of the class, while a readonly field is treated as a normal class member. From a usage point of view, if you want a field that can have different values in different objects of the same class, but whose value must not change for the lifetime of each object, choose a readonly field rather than a constant.
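
For illustration (the Circle class is hypothetical), a sketch of the two kinds of field:

public class Circle
{
    // const: compile-time constant, implicitly static, identical for every Circle
    public const double Pi = 3.14159;

    // readonly: fixed at run time, can differ between Circle instances
    public readonly double Radius;

    public Circle(double radius)
    {
        Radius = radius;   // assignment allowed only in a constructor or field initializer
    }
}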

6.What is the difference about Switch statement in C#?

No fall-through is allowed. Unlike the C++ switch statement, C# does not support an implicit fall-through from one case label to another. If you need that behaviour, you can use an explicit goto case or goto default.

switch (n)            // n and cost are assumed to be declared elsewhere
{
    case 1:
        cost += 25;
        break;
    case 2:
        cost += 25;
        goto case 1;   // explicit jump replaces C++-style fall-through
}

7. What is the difference between a static and an instance constructor?

An instance constructor implements code to initialize the instance of the class. A static constructor implements code to initialize the class itself when it is first loaded.

8. Assume that a class, Class1, has both instance and static constructors. Given the code below, how many times will the static and instance constructors fire?

Class1 c1 = new Class1();

Class1 c2 = new Class1();

Class1 c3 = new Class1();

By definition, a static constructor is fired only once when the class is loaded. An instance constructor on the other hand is fired each time the class is instantiated. So, in the code given above, the static constructor will fire once and the instance constructor will fire three times.
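
A rough sketch (Class1's bodies are invented for illustration) that makes the firing order visible:

using System;

class Class1
{
    static Class1()            // runs once, when the class is first loaded
    {
        Console.WriteLine("static constructor");
    }

    public Class1()            // runs every time the class is instantiated
    {
        Console.WriteLine("instance constructor");
    }
}

class Program
{
    static void Main()
    {
        Class1 c1 = new Class1();
        Class1 c2 = new Class1();
        Class1 c3 = new Class1();
        // prints "static constructor" once and "instance constructor" three times
    }
}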

9. In which cases do you use the override and new modifiers?


Use the new modifier to explicitly hide a member inherited from a base class. To hide an inherited member, declare it in the derived class using the same name, and modify it with the new modifier. Use the override modifier when the base member is declared virtual (or abstract) and you want calls made through a base-class reference to invoke the derived implementation.

10.You have one base class virtual function how will you call the function from derived class?

class a
{
    public virtual int m()
    {
        return 1;
    }
}

class b : a
{
    public int j()
    {
        return m();        // b does not override m, so this calls a.m();
                           // base.m() would call the base version explicitly
    }
}

11. Can we call a base class method without creating instance?

It is possible if it’s a static method.

It is also possible from a derived class: the derived class inherits the method, and can call the base implementation explicitly using the base keyword.

12. What is Method Overriding? How to override a function in C#?

Method overriding is a feature that allows you to invoke functions (that have the same signatures) belonging to different classes in the same inheritance hierarchy through a base class reference. C# uses two keywords, virtual and override, to accomplish method overriding. Let's understand this through a small example.

P1.cs


class BC
{
    public void Display()
    {
        System.Console.WriteLine("BC::Display");
    }
}

class DC : BC
{
    new public void Display()
    {
        System.Console.WriteLine("DC::Display");
    }
}

class Demo
{
    public static void Main()
    {
        BC b;
        b = new BC();
        b.Display();
    }
}

Output : BC::Display
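
Note that the listing above (which uses new and creates a BC instance) actually demonstrates member hiding rather than overriding. A rough sketch of true overriding, reusing the same class names, would use virtual and override:

class BC
{
    public virtual void Display()
    {
        System.Console.WriteLine("BC::Display");
    }
}

class DC : BC
{
    public override void Display()
    {
        System.Console.WriteLine("DC::Display");
    }
}

class Demo
{
    public static void Main()
    {
        BC b = new DC();   // base class reference to a derived object
        b.Display();       // prints DC::Display; the override is invoked
    }
}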


13. What is an Abstract Class?

A class that cannot be instantiated. An abstract class must be inherited, and any abstract members must be overridden in a derived class. An abstract class is essentially a blueprint for a class; it may leave some or all of its members without an implementation.

14.When do you absolutely have to declare a class as abstract?

1. When the class itself is inherited from an abstract class, but not all base abstract methods have been overridden.

2. When at least one of the methods in the class is abstract.
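
A brief sketch (Shape and Square are hypothetical names) showing both points: an abstract member forces the class to be abstract, and a derived class must override it:

abstract class Shape
{
    public abstract double Area();       // abstract member: no implementation here

    public void Describe()               // concrete members are allowed as well
    {
        System.Console.WriteLine("Area = " + Area());
    }
}

class Square : Shape
{
    private readonly double side;

    public Square(double side) { this.side = side; }

    public override double Area() { return side * side; }   // required override
}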

15. What is an interface class?

Interfaces, like classes, define a set of properties, methods, and events. But unlike classes, interfaces do not provide implementation. They are implemented by classes, and defined as separate entities from classes.

16.Can you inherit multiple interfaces?

Yes. .NET does support multiple interfaces.

17.What happens if you inherit multiple interfaces and they have conflicting method names?

It's up to you to implement the methods inside your own class, so implementation is left entirely up to you. This might cause a problem on a higher-level scale if similarly named methods from different interfaces expect different data, but as far as the compiler cares you're okay. You can also use explicit interface implementation to give each interface its own separate implementation.
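
For illustration (the interface and class names are hypothetical), explicit interface implementation keeps the conflicting methods separate:

interface ILoader { void Load(); }
interface IImage  { void Load(); }

class Picture : ILoader, IImage
{
    // callers see the version that matches the interface reference they hold
    void ILoader.Load() { System.Console.WriteLine("ILoader.Load"); }
    void IImage.Load()  { System.Console.WriteLine("IImage.Load"); }
}

// usage: ((ILoader)new Picture()).Load();   prints ILoader.Load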

18. What’s the difference between an interface and abstract class?

In an interface class, all methods are abstract - there is no implementation. In an abstract class some methods can be concrete. In an interface class, no accessibility modifiers are allowed. An abstract class may have accessibility modifiers.

19. Why can’t you specify the accessibility modifier for methods inside the interface?

They all must be public, and are therefore public by default.

20. Describe the accessibility modifier “protected internal”.

It is available to any code in the same assembly, and also to derived classes in other assemblies. In other words, it is the union of protected and internal access, not the intersection.

21. If a base class has a number of overloaded constructors and an inheriting class has a number of overloaded constructors; can you enforce a call from an inherited constructor to specific base constructor?


Yes. Place a colon and then base(parameter list) after the derived constructor's signature to invoke the appropriate base class constructor.
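
A small sketch (the class names are invented for illustration):

class BaseClass
{
    public BaseClass(int id) { /* ... */ }
    public BaseClass(int id, string name) { /* ... */ }
}

class DerivedClass : BaseClass
{
    // explicitly forwards to the BaseClass(int, string) constructor
    public DerivedClass(int id, string name) : base(id, name) { /* ... */ }
}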

22. What are the different ways a method can be overloaded?

Different parameter data types, different number of parameters, different order of parameters.

23. How do you mark a method obsolete?

[Obsolete]

public int Foo()

{…}

or

[Obsolete("This is a message describing why this method is obsolete")]

public int Foo()

{…}

24. What is a sealed class?

It is a class, which cannot be subclassed. It is a good practice to mark your classes as sealed, if you do not intend them to be subclassed.

25. How do you prevent a class from being inherited?

Mark it as sealed.

26. Can you inherit from multiple base classes in C#?

No. C# does not support multiple inheritance, so you cannot inherit from more than one base class. You can however, implement multiple interfaces.

27. What is an indexer in C#?

Indexers are often described as "smart arrays" in the C# community. Defining a C# indexer is much like defining a property. We can say that an indexer is a member that enables an object to be indexed in the same way as an array.

<modifier> <return type> this [argument list]
{
    get
    {
        // get code goes here
    }
    set
    {
        // set code goes here
    }
}

The modifier can be private, public, protected or internal. The return type can be any valid C# type. this is a special keyword in C# that refers to the current instance of the class. The formal argument list specifies the parameters of the indexer.
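
As a concrete sketch (the WeekTemperatures class is hypothetical):

public class WeekTemperatures
{
    private readonly double[] temps = new double[7];

    // indexer: lets callers write w[2] instead of w.GetTemp(2)
    public double this[int day]
    {
        get { return temps[day]; }
        set { temps[day] = value; }
    }
}

// usage:
// WeekTemperatures w = new WeekTemperatures();
// w[0] = 21.5;
// System.Console.WriteLine(w[0]);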

28. What is the use of fixed statement?

The fixed statement sets a pointer to a managed variable and “pins” that variable during the execution of statement.

Without fixed, pointers to managed variables would be of little use since garbage collection could relocate the variables unpredictably. (In fact, the C# compiler will not allow you to set a pointer to a managed variable except in a fixed statement.)

Eg:

class A
{
    public int i;
}

A objA = new A();           // A is a .NET managed type

fixed (int* pt = &objA.i)   // use fixed while taking pointers to managed variables
{
    *pt = 45;               // inside this block, use the pointer as you want
}

29. What is the order of destructors called in a polymorphism hierarchy?


Destructors are called in reverse order of construction. The destructor of the most derived class is called first, followed by its parent's destructor, and so on up to the topmost class in the hierarchy.

You don't have control over when the first destructor will be called, since that is determined by the garbage collector. Some time after the object goes out of scope, the GC calls its destructor, then its parent's destructor, and so on.

When the program terminates, the destructors of the remaining objects are called.

30. What is a virtual method?

In C#, the virtual keyword can be used to mark a method or property so that it can be overridden in a derived class. Such methods and properties are called virtual methods/properties. By default, methods and properties in C# are non-virtual.

31. Is it possible to Override Private Virtual methods?

No, First of all you cannot declare a method as ‘private virtual’.

32. Can I call a virtual method from a constructor/destructor?

Yes, but it's generally not a good idea. The mechanics of object construction in .NET are quite different from C++, and this affects virtual method calls in constructors.

C++ constructs objects from base to derived, so when the base constructor is executing the object is effectively a base object, and virtual method calls are routed to the base class implementation. By contrast, in .NET the derived constructor is executed first, which means the object is always a derived object and virtual method calls are always routed to the derived implementation. (Note that the C# compiler inserts a call to the base class constructor at the start of the derived constructor, thus preserving standard OO semantics by creating the illusion that the base constructor is executed first.)

The same issue arises when calling virtual methods from C# destructors. A virtual method call in a base destructor will be routed to the derived implementation.

33. How do I declare a pure virtual function in C#?

Use the abstract modifier on the method. The class must also be marked as abstract (naturally). Note that abstract methods cannot have an implementation (unlike pure virtual C++ methods).

34. Are all methods virtual in C#?

No. Like C++, methods are non-virtual by default, but can be marked as virtual.

35. What is the difference between shadow and override?

When you define a class that inherits from a base class, you sometimes want to redefine one or more of the base class elements in the derived class. Shadowing and overriding are both available for this purpose.

Comparison


It is easy to confuse shadowing with overriding. Both are used when a derived class inherits from a base class, and both redefine one declared element with another. But there are significant differences between the two. The following table compares shadowing with overriding.

Purpose
Shadowing: protecting against a subsequent base class modification that introduces a member you have already defined in your derived class.
Overriding: achieving polymorphism by defining a different implementation of a procedure or property with the same calling sequence (1).

Redefined element
Shadowing: any declared element type.
Overriding: only a procedure (Function, Sub, or Operator) or property.

Redefining element
Shadowing: any declared element type.
Overriding: only a procedure or property with the identical calling sequence (1).

Access level of redefining element
Shadowing: any access level.
Overriding: cannot change the access level of the overridden element.

Readability and writability of redefining element
Shadowing: any combination.
Overriding: cannot change the readability or writability of the overridden property.

Control over redefining
Shadowing: the base class element cannot enforce or prohibit shadowing.
Overriding: the base class element can specify MustOverride, NotOverridable, or Overridable.

Keyword usage
Shadowing: Shadows recommended in the derived class; Shadows assumed if neither Shadows nor Overrides is specified (2).
Overriding: Overridable or MustOverride required in the base class; Overrides required in the derived class.

Inheritance of the redefining element by classes deriving from your derived class
Shadowing: the shadowing element is inherited by further derived classes; the shadowed element is still hidden (3).
Overriding: the overriding element is inherited by further derived classes; the overridden element is still overridden.

1 The calling sequence consists of the element type (Function, Sub, Operator, or Property), name, parameter list, and return type. You cannot override a procedure with a property, or the other way around. You cannot override one kind of procedure (Function, Sub, or Operator) with another kind.

2 If you do not specify either Shadows or Overrides, the compiler issues a warning message to help you be sure which kind of redefinition you want to use. If you ignore the warning, the shadowing mechanism is used.

3 If the shadowing element is inaccessible in a further derived class, shadowing is not inherited. For example, if you declare the shadowing element as Private, a class deriving from your derived class inherits the original element instead of the shadowing element.

36. Should I make my destructor virtual?


A C# destructor is really just an override of the System.Object Finalize method, and so is virtual by definition.

37. Are C# destructors the same as C++ destructors?

No. They look the same but they are very different. The C# destructor syntax (with the familiar ~ character) is just syntactic sugar for an override of the System.Object Finalize method. This Finalize method is called by the garbage collector when it determines that an object is no longer referenced, before it frees the memory associated with the object. So far this sounds like a C++ destructor. The difference is that the garbage collector makes no guarantees about when this happens. Indeed, the algorithm employed by the CLR garbage collector means that it may be a long time after the application has finished with the object. This lack of certainty is often termed 'non-deterministic finalization', and it means that C# destructors are not suitable for releasing scarce resources such as database connections, file handles, etc.

To achieve deterministic destruction, a class must offer a method to be used for the purpose. The standard approach is for the class to implement the IDisposable interface. The user of the object must call the Dispose() method when it has finished with the object. C# offers the using construct to make this easier.
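
A minimal sketch of the deterministic pattern (the TempResource class is hypothetical):

using System;

class TempResource : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("resource released deterministically");
    }
}

class Program
{
    static void Main()
    {
        using (TempResource r = new TempResource())
        {
            // work with the resource here
        }   // Dispose() runs here, even if an exception was thrown inside the block
    }
}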

38. Are C# constructors the same as C++ constructors?

Very similar, but there are some significant differences. First, C# supports constructor chaining. This means one constructor can call another:

class Person
{
    public Person( string name, int age ) { … }

    public Person( string name ) : this( name, 0 ) {}

    public Person() : this( "", 0 ) {}
}

Another difference is that virtual method calls within a constructor are routed to the most derived implementation. Error handling is also somewhat different: if an exception occurs during construction of a C# object, the destructor (finalizer) will still be called. This is unlike C++, where the destructor is not called if construction is not completed. Finally, C# has static constructors. The static constructor for a class runs before the first instance of the class is created. Also note that (as in C++) some C# developers prefer the factory method pattern over constructors.

39. Can you declare a C++ type destructor in C# like ~MyClass()?

Yes, but what's the point? It simply becomes an override of Finalize(), Finalize() gives no guarantee about when the memory will be cleaned up, and it puts additional load on the garbage collector. The only time a finalizer should be implemented is when you are dealing with unmanaged resources.

40. What are the fundamental differences between value types and reference types?


C# divides types into two categories: value types and reference types. Most of the intrinsic types (e.g. int, char) are value types. Structs are also value types. Reference types include classes, arrays and strings. The basic idea is straightforward: an instance of a value type represents the actual data, whereas an instance of a reference type represents a pointer or reference to the data.

The most confusing aspect of this for C++ developers is that C# has predetermined which types are represented as values, and which are represented as references. A C++ developer expects to take responsibility for this decision. For example, in C++ we can do this:

int x1 = 3;            // x1 is a value on the stack
int *x2 = new int(3);  // x2 is a pointer to a value on the heap

but in C# there is no control:

int x1 = 3;            // x1 is a value on the stack
int x2 = new int();
x2 = 3;                // x2 is also a value on the stack!

41.How do you handle errors in VB.NET and C#?

C# and VB.NET both use structured exception handling (unlike VB6 and earlier versions, where error handling was implemented with On Error Goto statements). Error handling in both VB.NET and C# is implemented using the Try..Catch..Finally construct (C# uses the lower-case form: try...catch...finally).

42. What is the purpose of the finally block?

The code in finally block is guaranteed to run, irrespective of whether an error occurs or not. Critical portions of code, for example release of file handles or database connections, should be placed in the finally block.
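
A short sketch of the pattern (the file name data.txt is hypothetical):

using System;
using System.IO;

class Program
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("data.txt");     // hypothetical input file
            Console.WriteLine(reader.ReadLine());
        }
        catch (IOException ex)
        {
            Console.WriteLine("I/O error: " + ex.Message);
        }
        finally
        {
            if (reader != null) reader.Close();        // runs whether or not an exception occurred
        }
    }
}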

43. Can I use exceptions in C#?

Yes, in fact exceptions are the recommended error-handling mechanism in C# (and in .NET in general). Most of the .NET framework classes use exceptions to signal errors.

44. Why is it a bad idea to throw your own exceptions?

Well, if at that point you know that an error has occurred, then why not write the proper code to handle that error instead of passing a new Exception object to the catch block? Throwing your own exceptions signifies some design flaws in the project.

45. What’s the C# syntax to catch any possible exception?

A catch block that catches an exception of type System.Exception. You can also omit the parameter data type in this case and just write catch { }.

46. How to declare a two-dimensional array in C#?

The syntax for a two-dimensional array in C# is int[,] ArrayName; for example, int[,] grid = new int[3, 4]; declares a 3-by-4 array.


47.How can you sort the elements of the array in descending order?

Using the Array.Sort() and Array.Reverse() methods:

int[] arr = new int[3];
arr[0] = 4;
arr[1] = 1;
arr[2] = 5;
Array.Sort(arr);
Array.Reverse(arr);

48. What’s the difference between the System.Array.CopyTo() and System.Array.Clone()?

The Clone() method returns a new array (a shallow copy) containing all the elements of the original array. The CopyTo() method copies the elements into another existing array. Both perform a shallow copy: the copied elements contain references to the same objects as the elements in the original array. A deep copy (which neither of these methods performs) would create a new instance of each element's object, resulting in a different yet identical object.
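
A small sketch contrasting the two calls (the array contents are invented):

using System;

class Program
{
    static void Main()
    {
        string[] original = { "a", "b", "c" };

        // Clone() allocates and returns a new array of the same length
        string[] cloned = (string[]) original.Clone();

        // CopyTo() writes into an array the caller has already allocated
        string[] target = new string[3];
        original.CopyTo(target, 0);

        Console.WriteLine(cloned[1] + " " + target[2]);   // b c
    }
}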

49. Structs are largely redundant in C++. Why does C# have them?

In C++, a struct and a class are pretty much the same thing. The only difference is the default visibility level (public for structs, private for classes). However, in C# structs and classes are very different. In C#, structs are value types (instances stored directly on the stack, or inline within heap-based objects), whereas classes are reference types (instances stored on the heap, accessed indirectly via a reference). Also structs cannot inherit from structs or classes, though they can implement interfaces. Structs cannot have destructors. A C# struct is much more like a C struct than a C++ struct.

50. How does one compare strings in C#?

In the past, you had to call .ToString() on the strings when using the == or != operators to compare the strings' values. That will still work, but the C# compiler now automatically compares the values instead of the references when the == or != operators are used on string types. If you actually do want to compare references, it can be done as follows: if ((object) str1 == (object) str2) { … }

Here's an example showing how string compares work:

using System;

public class StringTest
{
    public static void Main(string[] args)
    {
        Object nullObj = null;
        Object realObj = new StringTest();
        int i = 10;

        Console.WriteLine("Null Object is [" + nullObj + "]\n"
            + "Real Object is [" + realObj + "]\n"
            + "i is [" + i + "]\n");

        // Show string equality operators
        string str1 = "foo";
        string str2 = "bar";
        string str3 = "bar";

        Console.WriteLine("{0} == {1} ? {2}", str1, str2, str1 == str2);
        Console.WriteLine("{0} == {1} ? {2}", str2, str3, str2 == str3);
    }
}

Output:

Null Object is []

Real Object is [StringTest]

i is [10]

foo == bar ? False

bar == bar ? True

51. Where can we use a DLL made in C#.NET?

Only in applications that target .NET. A DLL built with C#.NET is a semi-compiled (MSIL) assembly, not a COM object, so it can be used only within the .NET Framework, where it is JIT-compiled to native code at run time.

52. True or false: if A.Equals(B) is true then A.GetHashCode() and B.GetHashCode() must always return the same hash code.

True. The contract of GetHashCode requires that if A.Equals(B) returns true, then A.GetHashCode() and B.GetHashCode() must return the same value. (The reverse does not hold: two unequal objects may legitimately share a hash code.)

53.Is it possible to debug the classes written in other .Net languages in a C# project?


It is definitely possible to debug code written in other .NET languages within a C# project. As everyone knows, .NET can combine code written in several .NET languages into a single assembly; the same is true of debugging.

54. Does C# has its own class library?

Not exactly. The .NET Framework has a comprehensive class library, which C# can make use of. C# does not have its own class library.

55. IS it possible to have different access modifiers on the get/set methods of a property?

In C# 1.x, no: the access modifier on a property applies to both its get and set accessors, so the workaround is to make the property read-only (by providing only a get accessor) and create a separate private/internal set method. C# 2.0 added accessor-level modifiers, so you can now write, for example, public string Name { get { ... } private set { ... } }.

56. Is it possible to restrict the scope of a field/method of a class to the classes in the same namespace?

There is no way to restrict to a namespace. Namespaces are never units of protection. But if you’re using assemblies, you can use the ‘internal’ access modifier to restrict access to only within the assembly.

57. Is there an equivalent of exit() for quitting a C#.NET application?

Yes, you can use System.Environment.Exit(int exitCode) to exit the application or Application.Exit() if it’s a Windows Forms app.

58. What optimizations does the C# compiler perform when you use the /optimize+ compiler option?

The following is a response from a developer on the C# compiler team:

We get rid of unused locals (i.e., locals that are never read, even if assigned).
We get rid of unreachable code.
We get rid of try-catch blocks with an empty try.
We get rid of try-finally blocks with an empty try.
We get rid of try-finally blocks with an empty finally.
We optimize branches over branches: "gotoif A, lab1; goto lab2; lab1:" turns into "gotoif !A, lab2; lab1:".
We optimize branches to ret, branches to next instruction, and branches to branches.

59. Does C# support multiple inheritance?

No, use interfaces instead.

60. Is the goto statement supported in C#? How about Java?

Goto is fully supported in C#. In Java, goto is a reserved keyword but provides no functionality.

61. What happens when you encounter a continue statement inside for loop?


The rest of the loop body is skipped and control is transferred back to the beginning of the loop for the next iteration.

62. Write one code example for compile-time binding and one for run-time binding. What is early/late binding?

An object is early bound when it is assigned to a variable declared to be of a specific object type. Early bound objects allow the compiler to allocate memory and perform other optimizations before an application executes.

' Create a variable to hold a new object.
Dim FS As FileStream

' Assign a new object to the variable.
FS = New FileStream("C:\tmp.txt", FileMode.Open)

By contrast, an object is late bound when it is assigned to a variable declared to be of type Object. Objects of this type can hold references to any object, but lack many of the advantages of early-bound objects.

Dim xlApp As Object
xlApp = CreateObject("Excel.Application")

What do you mean by static and dynamic modeling?

Static modeling is used to specify the structure of the objects that exist in the problem domain. It is expressed using class, object and use case diagrams. Dynamic modeling, by contrast, represents the object interactions at runtime. It is represented by sequence, activity, collaboration and statechart diagrams.

What is static memory allocation and dynamic memory allocation?

Static memory allocation: the compiler allocates the required memory space for a declared variable. By using the address-of operator, the reserved address is obtained, and this address may be assigned to a pointer variable. Since most declared variables have static memory, this way of assigning a pointer value to a pointer variable is known as static memory allocation. The memory is assigned at compile time.

Dynamic memory allocation: functions such as malloc() or calloc() are used to obtain memory dynamically. If these functions are used to get memory dynamically and the values they return are assigned to pointer variables, such assignments are known as dynamic memory allocation. The memory is assigned at run time.


What is the difference between shadow and override?

Overriding redefines only methods (and properties), whereas shadowing can redefine any declared element. (See question 35 above for a detailed comparison.)

What does it mean to say "the canonical" form of XML?

The purpose of Canonical XML is to define a standard physical representation for an XML document. Canonical XML is a very strict XML syntax, which lets documents in canonical XML be compared directly. Using this strict syntax makes it easier to see whether two XML documents are the same. For example, a section of text in one document might read Black & White, whereas the same section might be written Black &amp; White in another document, and with a character reference such as &#38; in yet another. If you compare those three documents byte by byte, they'll be different. But if you write them all in canonical XML, which specifies every aspect of the syntax you can use, these three documents would all have the same representation of this text and could be compared without problem. This comparison is especially critical when XML documents are digitally signed: otherwise the digital signature might be computed over different byte sequences, and the document could be rejected.

What are Delegates?

Delegates are just like function pointers in C++, except that they are much safer to use due to their type safety. A delegate defines a function without implementing it and another class then provides the implementation. Events in C# are based on delegates, with the originator defining one or more callback functions.
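
A minimal sketch (the Transform delegate and Double method are hypothetical names):

using System;

// declare a delegate type: any method taking an int and returning an int matches
delegate int Transform(int x);

class Program
{
    static int Double(int x) { return x * 2; }

    static void Main()
    {
        Transform t = new Transform(Double);   // bind the delegate to a method
        Console.WriteLine(t(21));              // invokes Double; prints 42
    }
}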