When it comes to encoding special characters in URLs, we can easily make use of the functions defined inside the System.Web.dll assembly, such as HttpUtility.UrlEncode.

Although this is the most straightforward, hassle-free way to encode a string to make it compatible in an HTTP context, in some scenarios we don't get that privilege automatically (e.g. non-web applications).

I ran into such a scenario when I was developing a SQL Server user-defined function in VS 2010.
The particular problem on that occasion was that when you add an assembly reference in a SQL CLR project, the reference is fetched from the GAC on the database server.
Microsoft has added a set of assemblies to the GAC which they have identified as safe to be referenced from SQL CLR code, giving us the ability to use the functionality defined in those classes. But unfortunately System.Web.dll is not one of them.

So what is the workaround? Another function we can use for URL encoding can be found on the System.Uri class, which fortunately resides inside System.dll.

OK, so how do we use it to encode only the URL special characters?
Simple: just use the power of regular expressions.

Here is the function I wrote.

private static string EncodeUrl(string toEncode)
{
    // the URL special characters we want to percent-encode
    // (Regex lives in System.Text.RegularExpressions)
    const string pattern = "[$&+,/:;=?@]";
    var match = Regex.Match(toEncode, pattern);
    while (match.Success)
    {
        // replace every occurrence of the matched character with its escaped form
        toEncode = toEncode.Replace(match.Value, Uri.EscapeDataString(match.Value));
        match = Regex.Match(toEncode, pattern);
    }
    return toEncode;
}

The function checks the passed string repeatedly until all special characters are replaced with their respective encoded values.

The combination of Regex.Match and String.Replace makes sure that only the characters defined in the regular expression pattern are replaced.
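For example, a quick call like the following would encode only the characters listed in the pattern and leave everything else, including spaces, untouched:

var encoded = EncodeUrl("id=100&name=foo bar");
// encoded == "id%3D100%26name%3Dfoo bar"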






Invariance is a concept that can be seen in almost all programming languages that have a type system. Before we delve into more language-specific constructs, let's see what these big words actually mean.

Covariance is the ability to use a more derived type where a less derived type is expected (e.g. treating a string as an object).
Contravariance is the ability to use a less derived type where a more derived type is expected; it reverses that relationship.
Invariance is the inability to substitute in either direction; only the exact type is accepted.

In other words, reading between the lines, you'll see that covariance preserves assignment compatibility while contravariance reverses it, and by applying each of them only where it is safe we can avoid unnecessary invariance.

OK, with the theory covered, let's look at an example which exposes the invariant behavior of C#.
The example I'll be using is a scenario where we implement a generic interface for an imaginary inventory.

Let’s have a look at the code first
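A minimal sketch of the listing, assuming Store and Retrieve as the two method names (List<T> comes from System.Collections.Generic):

public interface IStorage<T>
{
    void Store(T item);
    T Retrieve(int index);
}

public class BasicStorage<T> : IStorage<T>
{
    // the generic List that acts as our repository
    private readonly List<T> repository = new List<T>();

    // the interface members are implemented explicitly
    void IStorage<T>.Store(T item)
    {
        repository.Add(item);
    }

    T IStorage<T>.Retrieve(int index)
    {
        return repository[index];
    }
}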

The IStorage interface defines two generic methods that facilitate storing and retrieving items from a generic List, and since there are no restrictions applied to type "T" either in the interface or in the implementing class BasicStorage, any kind of data can be stored in our List repository. Let's write some code to use this inventory of ours.
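Something along these lines (inventory01 is the variable referred to below):

IStorage<string> inventory01 = new BasicStorage<string>();
inventory01.Store("Item 01");
string firstItem = inventory01.Retrieve(0);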

In the code above I'm using the generic interface as the base type to access the implemented functionality, because the class implements the interface methods explicitly.
Written that way, everything works just the way we planned.
OK, now what will happen if we try to store data of type object in our repository? To do that we will have to convert our inventory01 variable to an instance of type IStorage<object>.
And since all strings are objects, and object is the base type for all types, we should be able to do just that. So let's write some code to do that.
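The attempt would look something like this (objectInventory is just an illustrative name):

// does not compile: cannot implicitly convert IStorage<string> to IStorage<object>
IStorage<object> objectInventory = inventory01;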


Hmm…
Although it makes sense to convert a string to an object, the compiler throws an error when we try to do that. But why?
If you look at it closely you will see that although all strings are objects, the converse is not true: not all objects are strings.
If that restriction were not applied, you would have been able to do something like this.
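For illustration (ArrayList comes from System.Collections):

// hypothetical code: this does NOT compile today, which is exactly the point
IStorage<object> objectInventory = inventory01;
objectInventory.Store(new ArrayList());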

As you can see, this code would try to store an ArrayList in a memory location that is structured to store strings, and that would clearly break the type safety of the .NET framework.
This is the invariant behavior of .NET: C# keeps these generic types invariant to ensure type safety in our programming constructs.

According to the language specification, in C# generic interfaces are invariant by default, while generic classes are always invariant.

With that in mind, let's have a look at the same example from a different perspective.
This time, let's use two different interfaces to store and retrieve data.
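A sketch of that split, again assuming Store and Retrieve as the member names:

public interface IDepostior<T>
{
    void Store(T item);
}

public interface IRetriever<T>
{
    T Retrieve(int index);
}

public class Storage<T> : IDepostior<T>, IRetriever<T>
{
    private readonly List<T> repository = new List<T>();

    void IDepostior<T>.Store(T item)
    {
        repository.Add(item);
    }

    T IRetriever<T>.Retrieve(int index)
    {
        return repository[index];
    }
}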

Now IDepostior has the storage facility, while IRetriever provides the reading functionality.
The code that uses this new structure will look like this.
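Roughly like this:

Storage<string> inventory02 = new Storage<string>();

IDepostior<string> store = inventory02;
store.Store("Item 01");

IRetriever<string> retriever = inventory02;
string item = retriever.Retrieve(0);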

Here we cast an instance of “Storage<string> inventory02” to “IDepostior<string> store” to store data, while we cast the same “inventory02” object to “IRetriever<string> retriever” to retrieve data.
With that said, will the following lines of code work now?

    Storage<string> inventory02 = new Storage<string>();
    IRetriever<string> retreiver = inventory02;
    IRetriever<object> objectRetreivor = inventory02;

The answer is “NO”.
In the previous example the restriction made sense. Does it make the same sense now?
Here the IRetriever interface only provides a means to read data; it doesn't expose any method to store data. Because of that, you cannot store incompatible types in the underlying storage through it. So the loophole that allowed the previous example to break type safety is no longer present in this interface. In other words, it is now perfectly safe to convert an IRetriever<string> to an IRetriever<object>.

In situations like this, where the type parameter only acts as a return value of the methods in a generic interface, we can tell the compiler that such implicit conversions are legal and that strict invariance does not have to be imposed on the type all the time. To do that we use the “out” keyword.

And the modified IRetriever interface will look something like this.
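Based on the sketch above, that would be:

public interface IRetriever<out T>
{
    // T appears only in output (return) positions, so "out" is allowed here
    T Retrieve(int index);
}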

Now we have just used Covariance to preserve assignment compatibility where it makes sense.

Right! Now let’s have a look at the other interface which allows us to store data. Will this be ok?

   Storage<object> inventory03 = new Storage<object>();
   IDepostior<string> depositor = inventory03;

Let's think of it like this: all strings are objects, so if you can perform a specific operation on a variable of type object, you should be able to carry out the same operation on a variable of type string.
In other words, if “B” derives from “A”, and type “A” exposes a set of members (behaviors, properties, etc.), then type “B” also exposes that same set of members.
So anything that can be carried out on A must also be supported by B.
If we consider our IDepostior example: if we can store objects and perform some operations on them, we should be able to store strings and perform the same set of operations on them too.
That sounds correct, but there must be a way for us to tell the compiler that the operations carried out on the generically typed items will only be operations defined on the most generalized type in the inheritance hierarchy; or, in other words, that only the operations supported by object will be carried out on our type.
We do that using the keyword “in”.

The “in” keyword tells the compiler that “T” may only appear in input positions: you can pass a “T”, or any type that derives from “T”, as a method parameter, but you cannot use “T” as the return type of a method.

So the modified interface will look something like this.
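Again based on the sketch above:

public interface IDepostior<in T>
{
    // T appears only in input (parameter) positions, so "in" is allowed here
    void Store(T item);
}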

Now we can reference our Storage<object> either through an IDepostior closed over object or through an IDepostior closed over a type that derives from object (such as string). And by adding the “in” keyword, we've made sure that the needed restrictions are there to keep the assignment type safe.

With that, we have just used contravariance to reverse assignment compatibility, letting types in an inheritance hierarchy reference each other through the interface when doing so is type safe.

The completed interfaces will be…
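Putting it all together (still a sketch built on the assumed member names):

public interface IDepostior<in T>
{
    void Store(T item);
}

public interface IRetriever<out T>
{
    T Retrieve(int index);
}

public class Storage<T> : IDepostior<T>, IRetriever<T>
{
    private readonly List<T> repository = new List<T>();

    void IDepostior<T>.Store(T item)
    {
        repository.Add(item);
    }

    T IRetriever<T>.Retrieve(int index)
    {
        return repository[index];
    }
}

// and both of the earlier assignments now compile
Storage<string> inventory02 = new Storage<string>();
IRetriever<object> objectRetreivor = inventory02;   // covariance

Storage<object> inventory03 = new Storage<object>();
IDepostior<string> depositor = inventory03;         // contravariance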

If you have any questions, just add a comment and I’ll answer it as soon as I can.

Hope this clears things up.


I will be breaking this post down into two sections.
First I will show how to accomplish the above-mentioned task using handlers, and then in the next section I will discuss the underlying concepts and basic theory for beginners.


How to get it done!!!


What are the core parts we will need to perform a cross-domain request using JavaScript?

  •  A mechanism to call the URL that sends data back to the calling page.
  •  A padded JSON object (or a JSONP object).
  •  If the URL being called doesn't support cross-domain calls, functionality to wrap or pad the returned JSON object with a callback function.
  •  A method to handle the callback.
OK! Let's take this one step at a time.
In this example I will be using the jQuery AJAX facilities to call the URL in the other domain.


jQuery.ajax({
    url: "http://www.crosseddomain.com/testPage.html",
    dataType: "jsonp",
    type: "GET",
    cache: true,
    jsonpCallback: 'handleResponse'
});

The jsonpCallback option allows us to specify the callback method which will handle the returned data once the response is received.
We set cache to true to avoid the random timestamp value jQuery appends to the request.
dataType is set to jsonp so the callback function will be added to the request.
When everything is configured correctly the page will send a request like http://www.crosseddomain.com/testPage.html?callback=handleResponse over the network.
Now all we have to do is write a function to handle the returned value; the function name should be the value we passed for the jsonpCallback option.
function handleResponse(response) {
//your custom logic goes here
}

That will work only if testPage.html wraps the response with the callback; in other words, only if the requested page takes responsibility for making the request cross-domain compatible by converting the JSON object into a JSONP object by adding the callback function around it.
So what shall we do when we are given a URL that does not support cross-domain script calls? How do we make it JSONP compatible?


Normal JSON objects will not work across domains because of the same-origin policy, which restricts browser-side scripts from accessing content in another site or domain. (I will talk about the theory in detail in my next blog, so stay tuned.)
The policy obviously does not restrict server-side languages like C# from accessing URLs in separate domains.


Keeping that in mind, we can send the request to the URL from the server side; after receiving the normal response we can wrap it with the callback function and send it back to the page where the call originated.


To get this done we will use a generic web handler or an ashx file.


For demonstration purposes, the handler will expect some string data from the requesting page and will send back a dummy JSON object that combines the user's input with some custom data from the server side. (To keep things simple I will not add the code to do an actual web request; I will just hard-code the JSON string.)


I will add a text box to the page and read the user input on each key press. After a response is received, the callback function will display the retrieved data in a label.


The HTML markup will be 

    <div>
        Enter your text : <input type="text" id="suggest" name="Suggestion" />
    </div>
    <label id="results">
    </label>

And the JQuery request will look something like this.

         $(document).ready(function () {
            $("#suggest").keyup(function () {           
                var data = $("#suggest").val();
                jQuery.ajax({
                    url: "http://localhost:18395/RequestHandler.ashx?data=" + data,
                    dataType: "jsonp",
                    type: "GET",
                    cache: true,
                    jsonpCallback: 'handleResponse'
                });

            });
        });

The RequestHandler.ashx file code is mentioned below.

    public class RequestHandler : IHttpHandler
    {

        public void ProcessRequest(HttpContext context)
        {
            var data = context.Request.QueryString["data"];
            var callback = context.Request.QueryString["callback"];
            var response = context.Response;
            response.Clear();
            response.ContentType = "application/json";
            var sampleJson = "{\"firstName\":\"Dhanushka\",\"lastName\":\"Athukorala\",\"dataReceived\":\"" + data + "\"}";
            var paddedResponse = string.Format("{0}({1})", callback, sampleJson);
            response.Write(paddedResponse);
            response.Flush();
        }
       
        public bool IsReusable
        {
            get
            {
                return false;
            }
        }
    }

So let's see what's happening in the handler code:
  •  We retrieve the data passed from the user from the query string (the data variable).
  •  The callback function name is also retrieved from the query string (the callback variable).
  •  A JSON object is created (this is the line where you would call a URL and get real data; you can use the functionality in HttpWebRequest and HttpWebResponse to accomplish this).
  •  We wrap the final JSON object with the callback function so it takes the format callback(JSON), making it a JSONP object (the paddedResponse variable).
  •  The JSONP object is written to the response, making it available to the calling page.
Now when the response is flushed it is returned to the calling page, where the script tag executes the received data (the JSONP object); in other words, it calls the callback method with a JSON object.
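For example, if the user has typed "abc", the request carries data=abc and callback=handleResponse, so the handler above writes a response body like this, which the browser then executes as script:

handleResponse({"firstName":"Dhanushka","lastName":"Athukorala","dataReceived":"abc"})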

The code for our call back method is mentioned below.

        function handleResponse(response) {
            var data = eval(response); // with jsonp the response is already an object, so eval simply returns it
            var firstName = data.firstName;
            var lastName = data.lastName;
            var sentData = data.dataReceived;
            $("#results").html("First Name : " + firstName + "<br/>"
                + "Last Name : " + lastName + "<br/>"
                + "Sent Data : " + sentData);
        }

This callback method will process the returned JSON object and print the information it contains in the results label.

So in a nutshell, the pieces of code we need are the HTML markup with the script tag, and the handler code shown above. Once everything is wired up, the results label will display the returned data as you type.

Hope it helps.

I will explain the underlying principles of cross-domain communication with scripts in the second part of this blog.

Cheers!!


I've got my blogger hat on after a very long time. With all the weekend lectures and office work cramming up my schedule, there was virtually no time left for me to blog about anything. I came across pretty interesting topics and subjects over my past few non-blogging months and will try to share them with you in the coming days.
OK, let's get started…

Let’s say we are writing a SQL query to retrieve the names of the customers who are above 20, we will write something like this

select cust.name
from customers cust
where cust.age>20

But when it comes to LINQ, as we all know, the syntax is totally inverted.
The same query will look like this when run against a list of Customer objects.

IEnumerable<string> olderCustomersQuery = from cust in customers
                                          where cust.Age > 20
                                          select cust.Name;


But WHY!!!

To understand we need to get back to basics.

There are two ways to build up a LINQ expression: extension methods and query operators. The example above falls into the latter category.
Let's see how to write the same example using extension methods.

IEnumerable<string> olderCustomersExtension = customers
                                              .Where(cust => cust.Age > 20)
                                              .Select(cust => cust.Name);


All these extension methods are available on any type that implements the IQueryable<T> or IEnumerable<T> interface.


So what's up with the ordering of the method calls? Why do we need to call the Where extension method before calling Select?

The answer is simple.

Where filters the data that is returned from a data source,
while Select narrows down the information shown in the final projection by specifying which properties to retrieve.

So when I say customers.Select(cust => cust.Name) it just gives me a sequence of strings, so customers.Select(cust => cust.Name).Where(cust => cust.Age > 20) is not possible, because the Where clause no longer gets a Customer object to query on.
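To make that concrete, here is a quick sketch (assuming the same customers list as above):

// compiles: Where still sees Customer objects
var names = customers.Where(cust => cust.Age > 20)
                     .Select(cust => cust.Name);

// does not compile: after Select the sequence contains strings,
// and string has no Age property for Where to test
// var broken = customers.Select(cust => cust.Name)
//                       .Where(cust => cust.Age > 20);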


Because of that, when we write LINQ expressions we should chain the extension methods in the correct order; otherwise we may get unexpected query results or compilation errors.

Although the LINQ extension methods and types defined in the System.Linq namespace are very powerful, the syntax can be quite cumbersome, and it is easy to introduce subtle errors that are difficult to spot and correct.
To avoid these drawbacks the designers of C# provided an alternative approach to using LINQ: a series of LINQ query operators that are part of the C# language itself.

Each of these query operators maps to an equivalent LINQ extension method.

Because these query operators are built to complement the functionality of the equivalent extension methods, they follow the same rules; the ordering practices mentioned above should be applied to them as well to avoid any kind of side effects on the data that will be retrieved by the LINQ expression.

Hope this clears things up!!


We all know that data manipulation using tables has been made very easy by LINQ to SQL, right? For example, a sample data retrieval and update query using LINQ will look something like this.

I'll just note down the steps involved so rookies will have an idea of what I'm talking about

  1. Add a new LINQ to SQL class file (.dbml) to your project.
  2. Then drag and drop the table you want to query using the Server Explorer of VS (let's name the new file Demo.dbml).
Here is some sample code to get you started:

var context = new DemoDataContext(); // get the data context you want to query
// retrieve data from the table
var records = from tableTest in context.DemoTable
              where tableTest.condition == -1
              select tableTest;


Now let's play with them a bit

foreach (var record in records)
{
    try
    {
        var rowInfo = context.DemoTable.Single(ob => ob.Id == record.Id);
        var someValue = GetValue();
        rowInfo.condition = someValue > 0 ? 1 : 0; // change the value of a property
        context.SubmitChanges(); // save the changed data to the table
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        continue;
    }
}

That's really straightforward, right? We can use the same set of steps to manipulate data through SQL views. But there seems to be some sort of a bug in VS2010 when using LINQ to SQL class files: when we drag and drop a table onto a .dbml file the primary key of the table is identified and set correctly, but if we use a view that combines a set of tables using UNIONs this does not happen, making it impossible to save or update data in the underlying tables because of the missing primary key. You can run the same code against such a view, but no data will get saved.
To avoid this bizarre situation you need to set the primary key of the view manually. To do that, select the view you need to work on in the designer, click on its primary key column, go to its properties, and set the Primary Key property to True.

Now you will be able to work with your views without any problems.

I came across a very useful plug-in a few days back for drawing Gantt charts using jQuery. You can find all the information plus the files you need to download here (https://github.com/thegrubbsian/jquery.ganttView).

As I described in a previous blog post where we drew jQuery charts, we can use the power of JSON to get serialized data from the server side to populate the needed data model. In this case the data model looks something like this.

var ganttData = [
    {
        id: 1, name: "Feature 1",
        series: [
            { name: "Planned", start: new Date(2010,00,01), end: new Date(2010,00,03) },
            { name: "Actual", start: new Date(2010,00,02), end: new Date(2010,00,05) }
        ]
    },
    {
        id: 2, name: "Feature 2",
        series: [
            { name: "Planned", start: new Date(2010,00,05), end: new Date(2010,00,20) },
            { name: "Actual", start: new Date(2010,00,06), end: new Date(2010,00,17) },
            { name: "Projected", start: new Date(2010,00,06), end: new Date(2010,00,17) }
        ]
    }
];

 
In order to tackle this, my server side data model should look something like this.

public class ContainerModel
{
    public String id { get; set; }
    public String name { get; set; }
    public List<DataModel> series { get; set; }
}

public class DataModel
{
    public string name { get; set; }
    public DateTime start { get; set; }
    public DateTime end { get; set; }
}
 

Let's write a sample method to populate the data model with some dummy data.

public static List<ContainerModel> PopulateData()
{
    var startDate = new DateTime(2010, 5, 12);
    var endDate = new DateTime(2010, 12, 5);

    var planned = new DataModel { name = "Planned", start = startDate, end = endDate };
    var actual = new DataModel { name = "Actual", start = startDate, end = endDate };
    var projected = new DataModel { name = "Projected", start = startDate, end = endDate };

    var dataSet = new List<ContainerModel>();
    for (int k = 0; k < 10; k++)
    {
        var container = new ContainerModel
        {
            id = k.ToString(),
            name = string.Format("Feature {0}", k),
            series = new List<DataModel> { planned, actual, projected }
        };
        dataSet.Add(container);
    }
    return dataSet;
}

And the following methods are used to serialize the data model to JSON so we can use it from the JavaScript side.

// Requires System.Runtime.Serialization.Json, System.IO and System.Text;
// extension methods must be declared inside a static class.
private static string ToJson(this object obj)
{
    var serializer = new DataContractJsonSerializer(obj.GetType());
    using (var ms = new MemoryStream())
    {
        serializer.WriteObject(ms, obj);
        return Encoding.Default.GetString(ms.ToArray());
    }
}

public static string GetJason(List<ContainerModel> dataModel)
{
    return dataModel.ToJson();
}


Let's write a web method so we can use a jQuery AJAX callback to retrieve the prepared data.
I have embedded the above-mentioned PopulateData, ToJson and GetJason methods in a Helper class for more code clarity.

[WebMethod]
public static string GetData()
{
  var modelData = Helper.PopulateData();
  var jsonData = Helper.GetJason(modelData);
  return jsonData;

}

Now all our server-side code is complete, but then comes the fun part. As you can see in the data model above, the dates are represented as JavaScript Date objects:
{ name: "Planned", start: new Date(2010,00,05), end: new Date(2010,00,20) },
But the date returned by the JSON serializer is a string that looks something like this: "/Date(1273602600000+0530)/".
In order to convert that serialized millisecond date string to a JavaScript Date object, we can use the JavaScript substring function together with eval:
startDate = eval("new " + start.substring(1, start.length - 7) + ")");
which results in a JavaScript Date object with a value like
"Wed May 12 2010 00:00:00 GMT+0530 (Sri Lanka Standard Time)"

So let's replace the DateUtils class in jQuery.ganttView.js with this one:

var DateUtils = {
    daysBetweenBothString: function (start, end) {
        // both start and end arrive as serialized "/Date(...)/" strings
        var startDate = eval("new " + start.substring(1, start.length - 7) + ")");
        var endDate = eval("new " + end.substring(1, end.length - 7) + ")");
        var count = 0, date = startDate.clone();
        while (date.compareTo(endDate) == -1) { count = count + 1; date.addDays(1); }
        return count;
    },
    daysBetweenEndString: function (start, end) {
        // only end arrives as a serialized string; start is already a Date
        var endDate = eval("new " + end.substring(1, end.length - 7) + ")");
        var count = 0, date = start.clone();
        while (date.compareTo(endDate) == -1) { count = count + 1; date.addDays(1); }
        return count;
    },
    daysBetween: function (start, end) {
        var count = 0, date = start.clone();
        while (date.compareTo(end) == -1) { count = count + 1; date.addDays(1); }
        return count;
    },
    isWeekend: function (date) {
        return date.getDay() % 6 == 0;
    }
};

And the addBlocks method with this

addBlocks: function (div, data, cellWidth, start) {
    var rows = jQuery("div.ganttview-blocks div.ganttview-block-container", div);
    var rowIdx = 0;
    for (var i = 0; i < data.length; i++) {
        for (var j = 0; j < data[i].series.length; j++) {
            var size = DateUtils.daysBetweenBothString(data[i].series[j].start, data[i].series[j].end);
            var offset = DateUtils.daysBetweenEndString(start, data[i].series[j].start);
            var blockDiv = jQuery("<div>", {
                "class": "ganttview-block",
                "title": data[i].series[j].name + ", " + size + " days",
                "css": {
                    "width": ((size * cellWidth) - 9) + "px",
                    "margin-left": ((offset * cellWidth) + 3) + "px"
                }
            });
            if (data[i].series[j].color) {
                blockDiv.css("background-color", data[i].series[j].color);
            }
            blockDiv.append($("<div>", { "class": "ganttview-block-text" }).text(size));
            jQuery(rows[rowIdx]).append(blockDiv);
            rowIdx = rowIdx + 1;
        }
    }
}

Now you can use this modified version of the JS file to support your JSON serialized data object.
In the aspx page (in this case Default.aspx ) where we need to draw the chart insert the following script

<script type="text/javascript">
$(document).ready(function (){
var ganttData = [];

$("#btnGetData").live("click", function (e){
  var button = $(this);

  function onDataReceived(series){

    ganttData = series.d;
    $("#ganttChart").ganttView({
    data: $.parseJSON(ganttData),
    start: new Date(2010, 05, 01),
    end: new Date(2010, 12, 31),
    slideWidth: 900
   });
  }
  var dataUrl = "Default.aspx/GetData";

  $.ajax({
   type: "POST",
   url: dataUrl,
   data: "{}",
   contentType: "application/json; charset=utf-8",
   dataType: "json",
   success: onDataReceived
  });
  });
});
</script>

And add the following div tag to the page so the JS file can inject the rendered chart inside it:
<div id="ganttChart"></div>
Accompanied by the button that triggers the AJAX callback:
<div><input id="btnGetData" type="button" value="Get Data" /></div>

Now you are all set for your first JQuery Gantt Chart.