Thursday 15 December 2016

Passing data from Parent to Child Component

Passing data to a nested component

In Angular 1 we used $broadcast, $emit and $on to communicate between controllers. Angular 2 takes a different approach.

In Angular 2, if you want to flow data from a parent component to a child component, you can use either the @Input decorator on a class property or the "inputs" property inside the @Component decorator. Here we will discuss the former approach.

Our Directory Structure

Parent Component

//our root app component
import { Component, NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule }   from '@angular/forms';
import { ChildComponent } from './child.component';
import { Character } from './character';

@Component({
  selector: 'my-app',
  template: `
  <h1>@input</h1>
  <div style = "border:2px solid orange; padding:5px;">
    <h2>Parent Component</h2>
    <input [(ngModel)] = "characters[0].name"/>
    <button (click)="select(characters[0])">Input</button>
    <br/><br/> 
    <child-component [character]="selectedCharacter"></child-component>
  </div>
  `,
})
export class App {
  characters = [
    {
      "id":11,
      "name":"Name"
    }];
     
  selectedCharacter: Character;

  select(selectedCharacter: Character) {
    this.selectedCharacter = selectedCharacter;
  }
}

@NgModule({
  imports: [ BrowserModule,FormsModule  ],
  declarations: [ App, ChildComponent ],
  bootstrap: [ App ]
})
export class AppModule {}

Child Component

import { Component, Input} from '@angular/core';
import { Character} from './character';
@Component({
  selector: 'child-component',
  template:`
  <div style="border:2px solid orange; padding:5px">
    <h2>Child Component: <span *ngIf="character">{{character.name}}</span></h2>
  </div>
  `
})
export class ChildComponent {
   @Input() character: Character;
}
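
The Character class is imported from './character' but its file is not shown above. A minimal sketch of character.ts, assuming only the id and name fields used by the parent component, could be:

  export class Character {
    id: number;
    name: string;
  }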

Full source

Find the full source on my GitHub.

Friday 11 November 2016

platform-browser VS platform-browser-dynamic

In Angular 2, you must have seen that every project includes two packages called platform-browser and platform-browser-dynamic.

 "@angular/platform-browser": "2.0.0-rc.4",
 "@angular/platform-browser-dynamic": "2.0.0-rc.4",
  

Let us understand the difference.

The difference between platform-browser-dynamic and platform-browser is the way your Angular app is compiled.

@angular/platform-browser

  • It contains code shared for browser execution (DOM thread, WebWorker).
  • It is used with an Ahead-of-Time (AoT) pre-compiled version of the application, which usually means a significantly smaller package being sent to the browser.

@angular/platform-browser-dynamic

  • It contains the client-side code that processes templates (bindings, components, ...) and reflective dependency injection.
  • It uses the Just-in-Time (JIT) compiler, so the application is compiled on the client side.

When the offline template compiler is used, platform-browser-dynamic isn't necessary because all reflective access and metadata are converted to generated code.
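
As a rough sketch of how the two entry points differ (module and factory names here are placeholders, and the .ngfactory file is generated by the offline compiler), a JIT build bootstraps the module itself while an AoT build bootstraps the generated factory:

  // main.jit.ts - JIT: templates are compiled in the browser, so platform-browser-dynamic is needed
  import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
  import { AppModule } from './app.module';

  platformBrowserDynamic().bootstrapModule(AppModule);

  // main.aot.ts - AoT: templates were pre-compiled offline, so platform-browser alone is enough
  import { platformBrowser } from '@angular/platform-browser';
  import { AppModuleNgFactory } from './app.module.ngfactory';

  platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);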

Saturday 22 October 2016

Javascript Modules VS Angular2 Modules

Javascript Modules                            Angular2 Modules
Code file that imports or exports something   Code file that organises the application
Organize our code files                       Organize our application
Modularize our code                           Modularize our application
Promote code reuse                            Promote application boundaries
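
To make the contrast concrete, here is a minimal sketch (file and class names are made up for the example): the JavaScript module only imports and exports code, while the Angular module describes how the application is put together:

  // math.ts - a JavaScript (ES) module: it just exports code for other files to import
  export function double(x: number): number {
    return x * 2;
  }

  // app.module.ts - an Angular module: it organises the application
  import { NgModule } from '@angular/core';
  import { BrowserModule } from '@angular/platform-browser';
  import { AppComponent } from './app.component';

  @NgModule({
    imports: [BrowserModule],        // other Angular modules this application needs
    declarations: [AppComponent],    // the components that belong to this module
    bootstrap: [AppComponent]        // the root component to start with
  })
  export class AppModule {}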


Angular2 Architecture in Brief

The architecture diagram identifies the eight main building blocks of an Angular application; a small sketch tying several of them together follows the list:

  • Modules
    • Angular apps are modular and Angular has its own modularity system called Angular modules or NgModules.
  • Components
    • A component controls a patch of screen called a view.
  • Templates
    • You define a component’s view with its companion template. A template is a form of HTML that tells Angular how to render the component.
  • Metadata
    • Metadata tells Angular how to process a class.
  • Data binding
    • Data binding is a mechanism for coordinating parts of a template with parts of a component.
  • Directives
    • @Component requires a view whereas a @Directive does not. Directives add behaviour to an existing DOM element.
  • Services
    • Service is a broad category encompassing any value, function, or feature that your application needs.
  • Dependency injection
    • Dependency injection is a way to supply a new instance of a class with the fully-formed dependencies it requires. Most dependencies are services.
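
Here is the small sketch promised above, tying components, templates, metadata, data binding, services and dependency injection together. All names are hypothetical, and the component is assumed to be declared in an NgModule elsewhere:

  // greeting.service.ts - a service: a plain class providing a feature the app needs
  import { Injectable } from '@angular/core';

  @Injectable()
  export class GreetingService {
    greet(name: string): string {
      return 'Hello, ' + name;
    }
  }

  // greeting.component.ts - a component with metadata, a template and an injected service
  import { Component } from '@angular/core';
  import { GreetingService } from './greeting.service';

  @Component({                                    // metadata: tells Angular how to process the class
    selector: 'my-greeting',
    template: `
      <h2 (click)="refresh()">{{message}}</h2>    <!-- event binding and interpolation -->
    `,
    providers: [GreetingService]                  // registration for dependency injection
  })
  export class GreetingComponent {
    message = '';
    constructor(private greeter: GreetingService) {   // the service is injected here
      this.refresh();
    }
    refresh() {
      this.message = this.greeter.greet('Angular');
    }
  }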

Wednesday 31 August 2016

IEnumerable VS ICollection

IEnumerable

First of all, it is important to understand that there are two different interfaces defined in the .NET base class library: a non-generic IEnumerable interface and a generic, type-safe IEnumerable<T> interface.

The IEnumerable interface is located in the System.Collections namespace and contains only a single method definition. The interface definition looks like this:


    public interface IEnumerable
    {
        IEnumerator GetEnumerator();
    }
    

It is important to know that the C# language foreach keyword works with all types that implement the IEnumerable interface.

IEnumerable<T>

Let’s now take a look at the definition of the generic and type-safe version called IEnumerable<T>, which is located in the System.Collections.Generic namespace:


    public interface IEnumerable<out T> : IEnumerable
    {
        IEnumerator<T> GetEnumerator();
    }
    

As you can see, the IEnumerable<T> interface inherits from the non-generic IEnumerable interface. Therefore a type that implements IEnumerable<T> also has to implement the members of IEnumerable.

ICollection

As you can imagine, there are also two versions of ICollection: System.Collections.ICollection and the generic version System.Collections.Generic.ICollection<T>.

Let’s take a look at the definition of the ICollection interface type:


    public interface ICollection : IEnumerable
    {
        int Count { get; }  
        bool IsSynchronized { get; }
        Object SyncRoot { get; }
     
        void CopyTo(Array array, int index);
    }
    

ICollection inherits from IEnumerable. You therefore have all members from the IEnumerable interface implemented in all classes that implement the ICollection interface.

ICollection<T>

When we look at the generic version, ICollection<T>, you’ll recognize that it does not look exactly the same as the non-generic equivalent:


    public interface ICollection<T> : IEnumerable<T>, IEnumerable
    {
        int Count { get; }
        bool IsReadOnly { get; }
     
        void Add(T item);
        void Clear();
        bool Contains(T item);
        void CopyTo(T[] array, int arrayIndex);
        bool Remove(T item);
    }
    

Which type should you depend on? As a rule of thumb, expose the smallest interface your callers actually need: IEnumerable<T> when they only need to iterate, and ICollection<T> when they also need Count or the ability to add and remove items.

Tuesday 30 August 2016

What is Yield in CSharp

Short Answer

The yield keyword helps us do custom, stateful iteration over .NET collections. There are two scenarios where the yield keyword is useful:

  • Customized iteration through a collection without creating a temporary collection.
  • Stateful iteration.

Long Answer

First Scenario:- Customized iteration through a collection

Let's try to understand what customized iteration means with an example. Consider the below code.

Let's say we have a simple list called "MyList" which holds a collection of 5 consecutive numeric values: 1, 2, 3, 4 and 5. This list is iterated from a console application, inside the static void Main method.

For now, let's visualize the "Main()" method as the caller. So the caller, i.e. the "Main()" method, iterates the list and displays the items inside it. Simple... till now.

  static List<int> MyList = new List<int>();
  static void Main(string[] args)
  {
    MyList.Add(1);
    MyList.Add(2);
    MyList.Add(3);
    MyList.Add(4);
    MyList.Add(5);
    foreach (int i in MyList) // Iterates through the list
    {
      Console.WriteLine(i);
    }
    Console.ReadLine();
  }

Now let me complicate the situation: let's say the caller only wants values greater than "3" from the collection. The obvious thing we, as C# developers, will do is create a function as shown below. This function has a temporary collection. In this temporary collection we first add the values which are greater than "3" and then return it to the caller. The caller can then iterate through this collection.

  static IEnumerable<int> FilterWithoutYield()
  {
    List<int> temp = new List<int>();
    foreach (int i in MyList)
    {
        if (i > 3)
        {
            temp.Add(i);
        }
    }
    return temp;
  } 

The above approach is fine, but it would be great if we could get rid of the temporary collection so that our code becomes simpler. This is where the "yield" keyword comes to help. Below is simple code showing how we have used yield. The "yield" keyword returns control to the caller; the caller does its work and then re-enters the function from where it left off, continuing the iteration from that point onwards. In other words, the "yield" keyword moves control of the program to and fro between the caller and the collection.

static IEnumerable<int> FilterWithYield()
{
  foreach (int i in MyList)
  {
      if (i > 3) yield return i;
  }
}  

For the above code, the following are the detailed steps of how control flows between the caller and the collection.

  • Step 1:- The caller calls the function to iterate over numbers greater than 3.
  • Step 2:- Inside the function the foreach loop runs from 1 to 2, from 2 to 3, until it encounters a value greater than "3", i.e. "4". As soon as the condition is met, the "yield" keyword sends this value back to the caller.
  • Step 3:- The caller displays the value on the console and re-enters the function for more data. This time when it re-enters, it does not start from the beginning; it remembers the state and continues from "5". The iteration then continues as usual.

Second Scenario:- Stateful iteration

Now let us add more complications to the above scenario. Let's say we want to display the running total of the above collection. What do I mean?

In other words, we will browse from 1 to 5 and, as we browse, keep adding to the total in a variable. So we start with "1" and the running total is "1"; we move to value "2" and the running total is the previous value "1" plus the current value "2", i.e. "3"; and so on.

In other words, we would like to iterate through the collection and, as we iterate, maintain the running-total state and return each value to the caller (i.e. the console application). So the function becomes something like the one shown below. The "runningTotal" variable keeps its old value every time the caller re-enters the function.

Typical implementation

  1. Caller calls the GetRunningTotal method.
  2. The running total of all items in the Numbers list is calculated and returned to the caller.
    Initial sum = [0]
    [0] + {1} => [1]
    [1] + {2} => [3]
    [3] + {3} => [6]
    [6] + {4} => [10]
    [10] + {5} => [15]
    [15] + {6} => [21]
    [21] + {7} => [28]
    [28] + {8} => [36]
    [36] + {9} => [45]
  3. Caller iterates over each running total and prints it on console.
    Print [1];
    Print [3];
    Print [6];
    Print [10];
    Print [15];
    Print [21];
    Print [28];
    Print [36];
    Print [45];
    (List items are shown in {} brackets and the running total at that point in [] brackets.)
class Program
{
        static List<int> Numbers = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9 };

        static void Main(string[] args)
        {
            int itemsTraversed = 0;
            foreach (var item in GetRunningTotal())
            {
                Console.WriteLine("Running total of first {0} items is {1}", ++itemsTraversed, item);
            }
            Console.ReadKey();
        }

        static IEnumerable<int> GetRunningTotal()
        {
            List<int> runningTotals = new List<int>();
            int runningTotal = 0;
            foreach (int number in Numbers)
            {
                Console.WriteLine("Adding {0} in running total", number);
                runningTotal += number;
                runningTotals.Add(runningTotal);

            }
            Console.WriteLine("\n\nReturn Running Total\n\n");
            return runningTotals;
        }
}

Please note that the GetRunningTotal method is called only once and it returns all the running totals to the caller in one go.

Yield Implementation

  1. Caller calls the GetRunningTotal method.
  2. Running total of first n items is calculated and returned to caller.
    [n-1]th running total + {n}th item => [n]th running total
  3. Caller prints it on console.
    Print [n]th running total;
  4. Control is given back to the GetRunningTotal method, which remembers its running state.
  5. Repeat steps 2, 3 and 4 until all items are iterated.
class Program
{
        static List<int> Numbers = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9 };

        static void Main(string[] args)
        {
            int itemsTraversed = 0;
            foreach (var item in GetRunningTotal())
            {
                Console.WriteLine("Running total of last {0} items is {1}", ++itemsTraversed, item);
            }
            Console.ReadKey();
        }

        static IEnumerable<int> GetRunningTotal()
        {
            int runningTotal = 0;
            foreach (int number in Numbers)
            {
                if (number > 1)
                    Console.WriteLine("Control is back and next item to be summed is {0}\n\n", number);
                runningTotal += number;
                yield return (runningTotal);

            }
        }
}

The above code outputs the values 1, 3, 6, 10, 15, 21, 28, 36, 45. Because of the pause/resume behavior, the runningTotal variable holds its value between iterations. So the yield keyword can be very handy for stateful calculations.

Monday 22 August 2016

Function Overloading, Polymorphism, Method Overloading in Javascript

I often do this...

C#

In CSharp

   public string CatStrings(string p1)                  {return p1;}
   public string CatStrings(string p1, int p2)          {return p1+p2.ToString();}
   public string CatStrings(string p1, int p2, bool p3) {return p1+p2.ToString()+p3.ToString();}

   CatStrings("one");        // result = one
   CatStrings("one",2);      // result = one2
   CatStrings("one",2,true); // result = one2true
                

JS

In JavaScript

 function CatStrings(p1, p2, p3)
 {
   var s = p1;
   if(typeof p2 !== "undefined") {s += p2;}
   if(typeof p3 !== "undefined") {s += p3;}
   return s;
 };

 CatStrings("one");        // result = one
 CatStrings("one",2);      // result = one2
 CatStrings("one",2,true); // result = one2true
                

What most developers do is...

JS

JavaScript

The best way to do function overloading with parameters is not to check the argument length or the types; checking the types will just make your code slow and you have the fun of Arrays, nulls, Objects, etc.

What most developers do is tack on an object as the last argument to their methods. This object can hold anything.

 function foo(a, b, opts) {

 }

 foo(1, 2, {"method":"add"});
 foo(3, 4, {"test":"equals", "bar":"tree"});
                

Then you can handle it any way you want in your method (switch, if-else, etc.), as sketched below.
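
For instance, a hedged sketch of how foo might branch on that options object (the keys shown are just the ones from the calls above):

  function foo(a, b, opts) {
    // Dispatch on whatever the caller put in the options object.
    opts = opts || {};
    if (opts.method === "add") {
      return a + b;
    }
    if (opts.test === "equals") {
      return a === b;
    }
    return [a, b, opts];    // fallback: just hand everything back
  }

  foo(1, 2, {"method": "add"});                   // 3
  foo(3, 4, {"test": "equals", "bar": "tree"});   // false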

Friday 12 August 2016

Connect Mongodb on a Microsoft Azure VM from Remote Machine

Create Directory for MongoData

    C:\> mkdir \MongoData
    C:\> mkdir \MongoLogs

Install mongod.exe as a service

    C:\Program Files\MongoDB\Server\3.2\bin>mongod --dbpath C:\MongoData\ --logpath C:\MongoLogs\mongolog.log --logappend  --install

Start the Service

    C:\> net start MongoDB
Now that MongoDB is installed and running, you'll need to open a port in Windows Firewall so you can remotely connect to MongoDB. From the Start menu, select Administrative Tools and then Windows Firewall with Advanced Security.

Inbound Rules

Configure Endpoint for MongoDB in Azure

Test your Mongodb from Local Machine

Tuesday 9 August 2016

Javascript Reference Data Type and Primitive Data Types

One of the main differences between reference data types and primitive data types is that a reference data type’s value is stored as a reference; it is not stored directly on the variable as a value, the way primitive data types are. For example:

 // The primitive data type String is stored as a value
 var person = "Nisar";
 var anotherPerson = person; // anotherPerson = the value of person
 person = "Rahul"; // value of person changed

 console.log(anotherPerson); // Nisar
 console.log(person); // Rahul

It is worth noting that even though we changed person to “Rahul,” the anotherPerson variable still retains the value that person had.

Compare the saved-as-value behavior of primitives demonstrated above with the saved-as-reference behavior of objects:

 var person = {name: "Nisar"};
 var anotherPerson = person;
 person.name = "Rahul";

 console.log(anotherPerson.name); // Rahul
 console.log(person.name); // Rahul

In this example, we copied the person object to anotherPerson, but because the value of person was stored as a reference and not as an actual value, anotherPerson reflected the change when we set the person.name property to “Rahul”: anotherPerson never stored an actual copy of the person’s properties, it only held a reference to them.

Saturday 23 July 2016

Convert DOC to HTML with Images

We will be using OpenXML and OpenXmlPowerTools to convert a Word document into HTML.

Step 1

Install Required Package

Install-Package DocumentFormat.OpenXml

Install-Package OpenXmlPowerTools

Add Reference

Right-click your project in Solution Explorer,
then Add >> Reference >> select System.Drawing and WindowsBase.

Follow the CODE Below

Fork me on GITHUB

https://github.com/niisar/WordToHTML

Sunday 17 July 2016

List vs Dictionary vs Sets in Dot NET

List

Any index-based collection is known as a List; indexing starts at 0.

Dictionary

Dictionaries let you use any type you want as the key: integers, dates, strings.

Sets

The focus is not on direct access to an element, but on treating the collection as a single group and performing operations on it as a whole.

Friday 17 June 2016

Understanding Basic of Owin and Katana

OWIN

OWIN (the Open Web Interface for .NET) is an open-source specification describing an abstraction layer between web servers and application components.
OWIN is a specification, not an implementation.

Katana

Katana provides an implementation of the OWIN specification.
For our purposes, we will use very basic components from Katana to demonstrate and understand how it works. Now let's get our hands dirty.

Creating a Barebones Katana Application

  1. Install-Package Microsoft.Owin.Hosting via the Nuget Package Manager Console
  2. Install-Package Microsoft.Owin.Host.HttpListener via the Nuget Package Manager Console

In the Katana implementation of the OWIN specification, the host will use reflection and scan the loaded assemblies for a type named Startup with a method with the name and signature void Configuration(IAppBuilder).

The IAppBuilder interface is NOT a part of the OWIN specification. It is, however, a required component for a Katana host. The IAppBuilder interface provides a core set of methods required to implement the OWIN standard, and serves as a base for additional extension methods for implementing middleware.

Running the Application


Running the Application with Multiple Middlewares in the Pipeline


Comment Out Call to Invoke

Monday 23 May 2016

Part 5 of 5: Notes on HTTP and Routing in Angular2

Part 4 of 5: Notes on Dependency Injection in Angular2

All examples are based on Angular 2 RC 1

DI - Class constructor injection

DI - Building a service

DI - Provider registration at Bootstrap and The Inject decorator

DI - The opaque token


Other Posts in Series:

Part 3 of 5: Notes on Forms and Pipes in Angular2

All examples are based on Angular 2 RC 1

Forms - Template Driven Forms

Forms - Model Driven Forms

Forms - Validation—built in

Forms - Validation—custom

Forms - Error handling


Other Posts in Series:

Part 2 of 5: Notes on Directive and Pipes in Angular2

Part 1 of 5: Component in Angular2

All examples are based on Angular 2 RC 1

Components - Displaying data in our templates

Components - Working with Events

Components - Using Property

Components - Using more complex data

Components - Using Sub-Component

Components - Getting data to the component with input

Components - Subscribing to component events with output

Components - Getting data to the component with @input

Components - Subscribing to component events with @output


Other Posts in Series:

Sunday 15 May 2016

My Notes from lynda.com (Learn AngularJS 2: The Basics)

All examples are based on Angular 2 RC 1

Displaying data in our templates

Working with Events

Using Property

Using more complex data

Using Sub-Component

Getting data to the component with input

Subscribing to component events with output

Getting data to the component with @input

Subscribing to component events with @output


Saturday 7 May 2016

What is Apache Hadoop ?

Hadoop brings the ability to cheaply process large amounts of data, regardless of its structure.

The Core of Hadoop: MapReduce

The important innovation of MapReduce is the ability to take a query over a dataset, divide it, and run it in parallel over multiple nodes. Distributing the computation solves the issue of data too large to fit onto a single machine. Combine this technique with commodity Linux servers and you have a cost-effective alternative to massive computing arrays.

Programming Hadoop at the MapReduce level is a case of working with the Java APIs, and manually loading data files into HDFS (Hadoop Distributed File System).
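
To make the idea concrete, here is a conceptual word-count sketch of the MapReduce model in TypeScript. This is not Hadoop's actual Java API, just an illustration of the map step emitting key/value pairs and the reduce step aggregating them per key:

  // map: runs independently on each chunk of input and emits (word, 1) pairs
  function map(line: string): Array<[string, number]> {
    return line.split(/\s+/).filter(w => w.length > 0).map(w => [w, 1] as [string, number]);
  }

  // reduce: receives one key together with all of its values and aggregates them
  function reduce(word: string, counts: number[]): [string, number] {
    return [word, counts.reduce((sum, n) => sum + n, 0)];
  }

  // The framework's "shuffle" step groups mapped pairs by key before reducing.
  const lines = ['big data is big', 'data is data'];
  const grouped = new Map<string, number[]>();
  for (const line of lines) {
    for (const [word, n] of map(line)) {
      grouped.set(word, (grouped.get(word) || []).concat(n));
    }
  }
  const totals = Array.from(grouped.entries()).map(([word, ns]) => reduce(word, ns));
  console.log(totals); // [['big', 2], ['data', 3], ['is', 2]]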

Programmability

Hadoop offers two solutions for making Hadoop programming easier.

Programming Pig

Pig is a programming language that simplifies the common tasks of working with Hadoop: loading data, expressing transformations on the data, and storing the final results.

Programming Hive

Hive enables Hadoop to operate as a data warehouse. It superimposes structure on data in HDFS and then permits queries over the data using a familiar SQL-like syntax. As with Pig, Hive's core capabilities are extensible.

Choosing between Hive and Pig can be confusing. Hive is more suitable for data warehousing tasks, with predominantly static structure and the need for frequent analysis. Hive's closeness to SQL makes it an ideal point of integration between Hadoop and other business intelligence tools.

Pig gives the developer more agility for the exploration of large datasets, allowing the development of succinct scripts for transforming data flows for incorporation into larger applications.

The Hadoop Bestiary

  • Ambari: deployment, configuration and monitoring
  • Flume: collection and import of log and event data
  • HBase: column-oriented database scaling to billions of rows
  • HCatalog: schema and data type sharing over Pig, Hive and MapReduce
  • HDFS: distributed redundant file system for Hadoop
  • Hive: data warehouse with SQL-like access
  • Mahout: library of machine learning and data mining algorithms
  • MapReduce: parallel computation on server clusters
  • Pig: high-level programming language for Hadoop computations
  • Oozie: orchestration and workflow management
  • Sqoop: imports data from relational databases
  • Whirr: cloud-agnostic deployment of clusters
  • ZooKeeper: configuration management and coordination

Getting data in and out: Sqoop and Flume

Improved interoperability with the rest of the data world is provided by Sqoop and Flume. Sqoop is a tool designed to import data from relational databases into Hadoop, either directly into HDFS or into Hive. Flume is designed to import streaming flows of log data directly into HDFS.

Hive's SQL friendliness means that it can be used as a point of integration with the vast universe of database tools capable of making connections via JDBC or ODBC database drivers.

Coordination and Workflow: Zookeeper and Oozie

As computing nodes can come and go, members of the cluster need to synchronize with each other, know where to access services, and know how they should be configured. This is the purpose of ZooKeeper.

The Oozie component provides features to manage the workflow and dependencies, removing the need for developers to code custom solutions.

Management and Deployment: Ambari and Whirr

Ambari is intended to help system administrators deploy and configure Hadoop, upgrade clusters, and monitor services. Through an API, it may be integrated with other system management tools.

Whirr is a highly complementary component. It offers a way of running services, including Hadoop, on cloud platforms. Whirr is cloud neutral and currently supports the Amazon EC2 and Rackspace services.

Machine Learning: Mahout

Every organization's data are diverse and particular to their needs. However, there is much less diversity in the kinds of analyses performed on the data. The Mahout project is a library of Hadoop implementations of common analytical computations. Use cases include user collaborative filtering, user recommendations, clustering, and classification.

What is Big Data ?

Big data is data that exceeds the processing capacity of conventional database systems.

The value of big data to an organization falls into two categories: analytical use and enabling new products.

What Does Big Data Look Like?

Input data to big data systems could be chatter from social networks, web server logs, traffic flow sensors, satellite imagery, broadcast audio streams, banking transactions, MP3s of rock music, the content of web pages, scans of government documents, GPS trails, telemetry from automobiles, financial market data; the list goes on.

To clarify matters, the three V's of Volume, Velocity and Variety are commonly used to characterize different aspects of big data.

Volume

The benefit gained from the ability to process large amounts of information is the main attraction of big data analytics.

Many companies already have large amounts of archived data, perhaps in the form of logs, but not the capacity to process it.

Velocity

It's not just the velocity of the incoming data that's the issue: it's possible to stream fast-moving data into bulk storage for later batch processing.

Variety

Rarely does data present itself in a form perfectly ordered and ready for processing. A common theme in big data systems is that the source data is diverse, and doesn't fall into neat relational structures. It could be text from social networks, image data, a raw feed directly from a sensor source. None of these things come ready for integration into an application.

Tuesday 3 May 2016

(Part 1 of 2) How to Create Simple Windows Service and Log with Log4Net

Our windows service will support two modes

  1. Interval Mode: executes a task at regular intervals after some delay
  2. Daily Mode: executes a task at a specific time of day

Create New Windows Service Project and Add app.config file

Add Reference (system.configuration)

Code Your Service

Adding an Installer to the Windows Service and Write Code

Setting the Windows Service Name and StartType

Making the Windows Service Automatically start after Installation

Compile and Install Service

Test Our Window Service

Get Source Code GitHub

Monday 2 May 2016

Three Moral Code for Designing WEB API (Security, Stability, Documentation)

Security

There are many methods to secure your API, but two are most widely used: token-based authentication and OAuth 2 + SSL.

Token-based authentication

For most APIs, I prefer a simple token-based authentication, where the token is a random hash assigned to the user and they can reset it at any point if it has been stolen. Allow the token to be passed in through POST or an HTTP header.
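
For illustration, a client call sending such a token in an HTTP header might look like this (the endpoint and token value are placeholders):

  const API_TOKEN = 'user-specific-random-hash';   // issued to the user, resettable at any time

  fetch('https://example.com/api/v1/friendlist', {
    method: 'GET',
    headers: { 'Authorization': 'Bearer ' + API_TOKEN }   // the token travels in a header, not the URL
  })
    .then(response => response.json())
    .then(data => console.log(data));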

OAuth 2 + SSL

Another very good option is OAuth 2 + SSL. You should be using SSL anyway, but OAuth 2 is reasonably simple to implement on the server side, and libraries are available for many common programming languages.

Here are some other important things to keep in mind:

  • Whitelisting Functionality. APIs generally allow you to do basic create, read, update, and delete operations on data. But you don’t want to allow these operations for every entity, so make sure each has a whitelist of allowable actions. Make sure, for example, that only authorized users can run commands like /user/delete/{id}. If a request is not allowed, send back an error such as a 406 Not Acceptable response.
  • Protect yourself against Cross-Site Request Forgery (CSRF). If you are allowing session or cookie authentication, you need to make sure that you’re protecting yourself from CSRF attacks.
  • Validate access to resources. In every request, you need to verify that a user is in fact allowed access to the specific item they are referencing.

Stability and Consistency

Let's say you have an API, http://niisar.com/api/friendlist, and it responds with JSON data. This seems fine at first. But what happens when you need to modify the format of the JSON? Everyone that’s already integrated with you is going to break. Oops.

So do some planning ahead, and version your API from the outset, explicitly incorporating a version number into the URL, like http://niisar.com/api/v1/friendlist, so that people can rely on v1 of the API.

Also use inheritance or a shared architecture to reuse the same naming conventions and data handling consistently throughout your API.

Finally, you need to record and publish a changelog to show differences between versions of your API so that users know exactly how to upgrade.

Documentation and Support

Documentation may be boring but if you want anyone to use your API, documentation is essential. You’ve simply got to get this right. It’s the first thing users will see, so in some ways it’s like the gift wrap. Present well, and people are more likely to use your API.

Fortunately, there are a number of software tools that facilitate and simplify the task of generating documentation. Or you can write something yourself for your API.

But what separates great documentation from adequate documentation is the inclusion of usage examples and, ideally, tutorials. This is what helps the user understand your API and where to start. It orients them and helps them load your API into their brain.

Make sure that a developer can get up and running with at least a basic implementation of your API, even if it’s just by following a tutorial, within a few minutes. I think 15 minutes is a good goal.

Some specific recommendations to ease and facilitate adoption of your API:

  • Make sure people can actually use your API and that it works the first time, every time.
  • Keep it simple, so that developers only have to learn your API, not your API plus 10 obscure new technologies.
  • Provide language-specific libraries to interface with your service.
  • Simplify any necessary signup.
  • Provide excellent support. A big barrier to adoption is lack of support. How will you handle and respond to a bug report? What about unclear documentation? An unsophisticated user? Forums, bug trackers, and email support are fantastic starts, but do make sure that when someone posts a bug, you really address it. Nobody wants to see a ghost-town forum or a giant list of bugs that haven’t been addressed.

How To Set Up A Print Style Sheet

You can use CSS to change the appearance of your web page when it's printed on paper. You can specify one font for the screen version and another for the print version.

You just need to press Ctrl + P to print, or call the print function from JavaScript with window.print(); both do the same thing.

The CSS for printing is typically wrapped in an @media print rule, or loaded from a separate stylesheet referenced with media="print".

Saturday 30 April 2016

Integrating Fluent Validation in Web API using Autofac

Create New Project

Install Necessary Packages

  Install-Package Autofac
  Install-Package Autofac.WebApi2
  Install-Package FluentValidation
  Install-Package FluentValidation.WebApi
  Install-Package Microsoft.AspNet.WebApi
  Install-Package Microsoft.AspNet.WebApi.Owin
  Install-Package Microsoft.Owin.Host.SystemWeb
  Install-Package Owin
  Install-Package Newtonsoft.Json
 

Create Partial Startup Classes

A partial class is just a class split across two or more files; all the parts are combined when the application is compiled.

It is used in situations such as:

  • When working on large projects, spreading a class over separate files enables multiple programmers to work on it at the same time.
  • When working with automatically generated source, code can be added to the class without having to recreate the source file. Visual Studio uses this approach when it creates Windows Forms, Web service wrapper code, and so on. You can create code that uses these classes without having to modify the file created by Visual Studio.
  • To split a class definition, use the partial keyword modifier

After adding these classes, our project structure looks like this.

Startup.Autofac.cs

Startup.WebApi.cs

Startup.cs

Add Infrastructure classes

Now our Project structure looks like

AutofacValidatorFactory.cs

AutofacWebModule.cs

ValidateFilterAttribute.cs

Add Modal Validation and Controller

Now our Project structure looks like

Controller

Model and Model Validation

Finally test our code




Thanks,


Download Code

Sunday 24 April 2016

(Part 4 of 4 ) Accessing FILESTREAM Tables

Create a FILESTREAM-enabled table

Add Rows with a simple text BLOB using CAST

Delete FILESTREAM data

(Part 3 of 4 ) Creating FILESTREAM Database

Create database with FILESTREAM filegroup/container

Add a FILESTREAM filegroup to the database

(Part 2 of 4 ) Enabling FILESTREAM for Windows and SQL Server

FILESTREAM must be enabled twice, once by the Windows administrator and then again by the SQL Server administrator. The reason for this is that FILESTREAM is somewhat of a hybrid feature. Yes, it's primarily a SQL Server feature, but because of its tight integration with the NTFS file system on Windows, it requires a file system filter driver to be installed, which is something only a Windows administrator can do. Typically the Windows administrator prepares an NTFS volume and enables FILESTREAM, and then the SQL Server administrator enables FILESTREAM separately at the server instance level before creating a FILESTREAM-enabled database. This same access level must be specified in both places.

Of course, this is not an issue if you are one person that acts as both the Windows and SQL Server administrator, but otherwise, both need to get along and they need to agree on the access level that FILESTREAM should be enabled for.

Enabling FILESTREAM for Windows

Enabling FILESTREAM for SQL Server

(Part 1 of 4 ) Why to use FILESTREAM or FILETABLE in SQL Server

FILESTREAM enables SQL Server-based applications to store unstructured data, such as documents and images, on the file system. Applications can leverage the rich streaming APIs and performance of the file system and at the same time maintain transactional consistency between the unstructured data and corresponding structured data.

FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data.

FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing.

FILESTREAM is not automatically enabled when you install or upgrade SQL Server. You must enable FILESTREAM by using SQL Server Configuration Manager and SQL Server Management Studio.

 EXEC sp_configure filestream_access_level, 2
 RECONFIGURE
 GO
  
To use FILESTREAM, you must create or modify a database to contain a special type of filegroup.
 CREATE DATABASE WebApiFileTable
 ON PRIMARY
 (Name = WebApiFileTable,
 FILENAME = 'E:\filestreamsql\FTDB.mdf'),
 FILEGROUP FTFG CONTAINS FILESTREAM
 (NAME = WebApiFileTableFS,
 FILENAME='E:\filestreamsql\FS')
 LOG ON
 (Name = WebApiFileTableLog,
 FILENAME = 'E:\filestreamsql\FTDBLog.ldf')
 WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL,
 DIRECTORY_NAME = N'WebApiFileTable');
 GO
  
Then, create or modify a table so that it contains a varbinary(max) column with the FILESTREAM attribute.
 USE WebApiFileTable
 GO
 CREATE TABLE WebApiUploads AS FileTable
 WITH
 (FileTable_Directory = 'WebApiUploads_Dir');
 GO
   
After you complete these tasks, you can use Transact-SQL and Win32 to manage the FILESTREAM data.
 INSERT INTO [dbo].[WebApiUploads]
 ([name],[file_stream])
 SELECT
 'NewFile.docx', * FROM OPENROWSET(BULK N'd:\kk.docx', SINGLE_BLOB) AS FileData
 GO

 select * from WebApiUploads

Friday 22 April 2016

Easy and Smart Modal Validation in Web API 2.0



Today I have written a custom ValidateFilterAttribute that eliminates some unnecessary code in the API controller and saves development time too. We will simply handle all validation from one common piece of code rather than writing every single validation check in each controller action.
I have used OnActionExecuting(HttpActionContext actionExecutingContext), as this allows me to remove the boilerplate if (!ModelState.IsValid) return from the methods.

Friday 11 March 2016

Component Communication with Input and Output in Angular 2

Component Communication with Input and Output

In Angular 1 we use $broadcast, $emit and $on to communicate between controllers. You can refer to my previous blog if you want to know more about this.
We have something similar in Angular 2, but the coding is different. To flow data from parent to child we use @Input, and to flow data from child to parent we use @Output. You can update input properties using property bindings [property], and you can subscribe to output properties using event bindings (event).
You can declare them either with the "inputs" and "outputs" properties in the @Component decorator or with the @Input/@Output decorators in the class; it's up to your own taste. We will go through both ways.

input to flow data parent to child

by using the inputs property in the @Component decorator

by using @Input decorator

output to flow data child to parent

by using the outputs property in the @Component decorator

by using @Output decorator
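
As a minimal sketch of the decorator-based approach (component and property names here are just examples):

  import { Component, Input, Output, EventEmitter } from '@angular/core';

  @Component({
    selector: 'child-item',
    template: `
      <p>{{item}}</p>
      <button (click)="notify.emit(item)">Send back to parent</button>
    `
  })
  export class ChildItemComponent {
    @Input() item: string;                          // parent -> child, bound with [item]="..."
    @Output() notify = new EventEmitter<string>();  // child -> parent, subscribed with (notify)="..."
  }

  // In the parent template:
  //   <child-item [item]="selectedItem" (notify)="onNotify($event)"></child-item>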

Download Code

Get the code @ my github

Friday 4 March 2016

Algorithm to find next 10 Business Day

The factors

How many week days per week? This depends on the company/country policies. In the US, it's generally 5 days per week, whereas in India and in many other countries, many companies work for 6 days a week. Our algorithm needs to take that into consideration.

Basic algorithm

  1. Calculate the time span in terms of weeks. Call it W.
  2. Deduct the first and last week from the number of weeks: W = W - 2.
  3. Take the number of working days as WRK.
  4. Sum up all the days: W + WRK.

Let's see how it works in a program; a simple sketch follows.
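
As a stand-in sketch (not the original program), here is a simple day-by-day approach that skips weekends only; it ignores holidays, does not use the week-counting arithmetic described above, and the weekend check would need adjusting for a 6-day working week:

  // Returns the next `count` business days after `start`, treating Saturday and Sunday as non-working.
  function nextBusinessDays(start: Date, count: number): Date[] {
    const result: Date[] = [];
    const day = new Date(start);
    while (result.length < count) {
      day.setDate(day.getDate() + 1);
      const dow = day.getDay();           // 0 = Sunday, 6 = Saturday
      if (dow !== 0 && dow !== 6) {
        result.push(new Date(day));
      }
    }
    return result;
  }

  // Example: the next 10 business days from today.
  console.log(nextBusinessDays(new Date(), 10).map(d => d.toDateString()));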

Component in Angular2

Angular 1 wasn’t built around the concept of components. Instead, we’d attach controllers to various parts of the page with our custom logic.

Angular 2 drops all of this for a much cleaner, more object-oriented component model.

If you are familiar with OOP patterns, then you will immediately understand that a component is just a class that represents an element on the screen, with member data that influences the way it looks and behaves.

Now let's go through the example below and try to understand components and how to nest one component inside another.
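
A minimal sketch of a nested (parent/child) component pair could look like this. Note that in the RC-era Angular 2 used here the child is listed in the parent's directives metadata, while in final releases it is declared in an NgModule instead; the names are illustrative:

  import { Component } from '@angular/core';

  @Component({
    selector: 'user-badge',
    template: `<span>{{label}}</span>`
  })
  export class UserBadgeComponent {
    label = 'I am the nested child';
  }

  @Component({
    selector: 'my-app',
    template: `
      <h2>Parent component</h2>
      <user-badge></user-badge>
    `,
    directives: [UserBadgeComponent]   // RC-era nesting; later Angular versions declare the child in an NgModule
  })
  export class AppComponent {}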

Angular2 Event Binding Demo

Download Code

Get the code @ my github

Wednesday 2 March 2016

Event Binding in Angular 2

Event Binding Nature

You can hook into just about any DOM-based event using the native event name, like (click), (mouseup) and so on. You can even use this approach to bind to any event that another directive might emit on the DOM tree.
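
A minimal sketch of what that looks like in a component (names are illustrative):

  import { Component } from '@angular/core';

  @Component({
    selector: 'event-demo',
    template: `
      <button (click)="log('click', $event)">Click me</button>
      <div (mouseup)="log('mouseup', $event)">Release the mouse over this div</div>
    `
  })
  export class EventDemoComponent {
    log(name: string, event: Event) {
      console.log('event fired:', name, event.type);
    }
  }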

Angular2 Event Binding Demo

Download Code

Get the code @ my github

Closing Stock

I haven't written SQL in about a year. Today I got a call from my friend; he was struggling with querying stock adjustments. It feels good that even after a gap of one year I can still write SQL with the same efficiency. :)

This is what I did to help him.

Create Table

  CREATE TABLE [dbo].[MIS_COAL_DTLS](
   [ID] [int] IDENTITY(1,1) NOT NULL,
   [COAL_UNIT1] [decimal](14, 3) NULL,
   [COAL_UNIT2] [decimal](14, 3) NULL,
   [COAL_RCPT] [decimal](14, 3) NULL,
   [CLOSING_STOCK] [decimal](14, 3) NULL,
   [ENT_DATE] [date] NULL,
   [FIN_YEAR] [varchar](50) NULL,
   [CAL_YEAR] [int] NULL
  ) 

Table data

Cursor for getting Closing Stock

  DECLARE @ID int
  DECLARE @COAL_UNIT1 varchar(50)
  declare @COAL_UNIT2 varchar(50)
  declare @COAL_RCPT [decimal](14, 3)
  declare @CLOSING_STOCK  [decimal](14, 3)
  declare @ent_date date
  DECLARE @newclosingstock [decimal](14, 3)
  declare @openingstock [decimal](14, 3)

  DECLARE cur_emp CURSOR
  STATIC FOR 
  SELECT ID,COAL_UNIT1,COAL_UNIT2, COAL_RCPT, CLOSING_STOCK,ent_date from MIS_COAL_DTLS
  OPEN cur_emp
  IF @@CURSOR_ROWS > 0
   BEGIN 
   FETCH NEXT FROM cur_emp INTO @Id, @COAL_UNIT1, @COAL_UNIT2, @COAL_RCPT, @CLOSING_STOCK,@ent_date
   WHILE @@Fetch_status = 0
   BEGIN
   select @openingstock = closing_stock from MIS_COAL_DTLS where ent_date >= dateadd(day,datediff(day,1,@ent_date),0)
    and ent_date < dateadd(day,datediff(day,0,@ent_date),0)
   set @newclosingstock = (@openingstock + @COAL_RCPT) - (cast(@COAL_UNIT1 as decimal(14,3)) + cast(@COAL_UNIT2 as decimal(14,3)));
   if @newclosingstock is not null
       update  MIS_COAL_DTLS set CLOSING_STOCK = isnull(@newclosingstock,0) where id = @ID;
      --print @newclosingstock
   FETCH NEXT FROM cur_emp INTO @Id, @COAL_UNIT1, @COAL_UNIT2, @COAL_RCPT, @CLOSING_STOCK,@ent_date
   END
  END
  CLOSE cur_emp
  DEALLOCATE cur_emp

Saturday 27 February 2016

Comparing Angular 1 and Angular 2 Side by Side

Comparing Component and Controller

Angular 2

The Component

Controllers are a big part of Angular 1 that is going away in Angular 2. In Angular 2 you will probably write all your controllers as components.


  import {Component} from 'angular2/core'

  @Component({
    selector: 'my-app',
    providers: [],
    template: `
      <div>
        <h2>Hello {{name}}</h2>
      </div>
    `,
    directives: []
  })
  export class App {
    name: string;
    constructor() {
      this.name = 'Angular2';
    }
  }
                

    <my-app>
      loading...
    </my-app>
                

Angular 1

The Controller


var app = angular.module('app', []);

app.controller('MainCtrl', function($scope) {
  $scope.name = 'Hello Angular1';
});
                

  <body ng-controller="MainCtrl">
    <h2>{{name}}</h2>
  </body>
              

Structural Directives

Angular 2

*ngFor, *ngIf


    <ul>
      <li *ngFor="#ball of balls">
        {{ball.name}}
      </li>
    </ul>
    <div *ngIf="balls.length">
      <h3>You have {{balls.length}} balls</h3>
    </div>
            

Angular 1

ng-repeat, ng-if


    <ul>
      <li ng-repeat="ball in balls">
        {{ball.name}}
      </li>
    </ul>
    <div ng-if="balls.length">
      <h3>You have {{balls.length}} ball </h3>
    </div>
                

Two-Way Data Binding

Angular 2

[(ngModel)]='value'


    <input [(ngModel)]="me.name">
            

Angular 1

ng-model='value'


    <input ng-model="me.name">
                

Property Binding

Angular 2

[Property]='Property'


    <div [style.visibility]="tools ? 'visible' : 'hidden'">
      <img [src]="imagePath">
      <a [href]="link">{{tools}}</a>
    </div>
            

Angular 1

ng-property='Property'


    <div ng-style="tools ? {visibility: 'visible'}: {visibility: 'hidden'}">
        <img ng-src="{{tools}}">
        <a ng-href="{{tools}}">
          {{tools}}
        </a>
    </div>
                

Event Binding

Angular 2

(event)='action()'


    <input
      (blur)="log('blur')"
      (focus)="log('focus')"
      (keydown)="log('keydown', $event)"
      (keyup)="log('keyup', $event)"
      (keypress)="log('keypress', $event)"
      >
                

Angular 1

ng-event='action()'


        <input
          ng-blur="log('blur')"
          ng-focus="log('focus')"
          ng-keydown="log('keydown', $event)"
          ng-keyup="log('keyup', $event)"
          ng-keypress="log('keypress', $event)"
          >
            

Services and DI

Angular 2

Injectable Service

In Angular 1 we create services using any one of factory, service, provider, constant or value, all of which are covered under the provider concept.

But in Angular 2 all of these are consolidated into a single concept: a class, typically marked with the @Injectable decorator.


  import {Injectable} from 'angular2/core';
  
  @Injectable()
  export class StudentService {
    getStudents = () => [
      { id: 1, name: 'Nisar' },
      { id: 2, name: 'Sonu' },
      { id: 3, name: 'Ram' }
    ];
  }
            

Using same service in Component


  import { Component } from 'angular2/core';
  import { StudentService } from './student.service';
  
  @Component({
    selector: 'my-students',
    templateUrl: 'app/student.component.html',
    providers: [StudentService]
  })
  export class StudentsComponent {
    constructor(
      private _StudentService: StudentService) { }
    students = this._StudentService.getStudents();
  }
            

Angular 1

Service


  (function () {
    angular
      .module('app')
      .service('StudentService', StudentService);

    function StudentService() {
      this.getStudents = function () {
        return [
          { id: 1, name: 'X-Wing Fighter' },
          { id: 2, name: 'Tie Fighter' },
          { id: 3, name: 'Y-Wing Fighter' }
        ];
      }
    }
  })();
            

Using same service in Controller


  (function () {
    angular
      .module('app', [])
      .controller('StdController', StdController);
  
    StdController.$inject = ['StudentService'];
    function StdController(StudentService) {
      var std = this;
      std.title = 'Services';
      std.Students = StudentService.getStudents();
    }
  })();