


Date: Friday, 29 Mar 2013 09:07

Some amazing books that I am keeping with me these days for reference; I go through portions of them whenever I get the opportunity.

Amazon List: http://amzn.com/w/LJTHFOO05T4U

  • C# in Depth, Second Edition by Jon Skeet
  • CLR via C#, Second Edition (Pro Developer) by Jeffrey Richter
  • Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition) by Krzysztof Cwalina, Brad Abrams
  • Microsoft .NET: Architecting Applications for the Enterprise (Pro-Developer) by Dino Esposito, Andrea Saltarello
  • Microsoft Application Architecture Guide, 2nd Edition (Patterns & Practices) by Microsoft Patterns & Practices Team
  • NHibernate 3.0 Cookbook by Jason Dentler
  • Patterns of Enterprise Application Architecture by Martin Fowler
  • Professional ASP.NET Design Patterns by Scott Millett
  • Programming Entity Framework: DbContext by Julia Lerman, Rowan Miller

Author: "aleem"
Date: Thursday, 14 Mar 2013 14:44

NHibernate

NHibernate is one of the many Object-Relational Mapping (ORM) tools for the .NET platform.

A very quick description of ORM:

ORM is the technique of mapping objects to database relations so that you do not have to write SQL statements (SELECT, INSERT, UPDATE, DELETE, JOIN, and so on) by hand; in NHibernate's case the mapping is typically defined in XML files.

NHibernate removes the need to hand-write data access with ADO.NET, LINQ to SQL and the like, and thus forms the backbone of the Data Access Layer in n-tier architectures.
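To illustrate the difference, here is a minimal sketch (not part of the original tutorial) contrasting a hand-written ADO.NET insert with the equivalent NHibernate call; it assumes the Book class and session factory built in the steps below, an existing connectionString variable, and using directives for System, System.Data.SqlClient and NHibernate:

// Raw ADO.NET: you write, parameterise and maintain the SQL yourself
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "INSERT INTO Book (Id, Title, Author) VALUES (@id, @title, @author)", connection))
{
    command.Parameters.AddWithValue("@id", Guid.NewGuid());
    command.Parameters.AddWithValue("@title", "CLR via C#");
    command.Parameters.AddWithValue("@author", "Jeffrey Richter");
    connection.Open();
    command.ExecuteNonQuery();
}

// NHibernate: you work with objects and the mapping generates the SQL
using (var session = sessionFactory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    session.Save(new Book { Title = "CLR via C#", Author = "Jeffrey Richter" });
    transaction.Commit();
}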

Requirements for this Tutorial:

This tutorial will be split into 11 steps:

Step 1:- Create a 2-Tier Project

Step 2:- Install NHibernate

Step 3:- Define Business Objects (Domain Folder)

Step 4:- Define the Mapping (Mappings Folder)

Step 5:- Setting XML Schema

Step 6:- Configure NHibernate

Step 7:- Create ISessionFactory

Step 8:- Create Table from Schema and Test Connection

Step 9:- Create Repositories

Step 10:- Define Methods in the Repositories

Step 11:- Testing the Add, Update, Delete and Get Methods

Before we continue, I want to add that I know this is a very long and mostly text-based tutorial, but I searched the internet at length for a decent working tutorial without errors and complications, so this post should cover it all. I also worked through everything again while writing it to make sure no errors remain. So hold on and read through; everything should be here :)

So, let's begin…

Step 1:- Create a 2-Tier Project

1. Create a new Web Application and name the solution "MyFirstNhibernate"

2. Add a new Class Library project to the solution and name it "DataLayer"; it will be used to connect to the database through NHibernate

3. Rename the web layer to "PresentationLayer" and add a reference to the "DataLayer" project

4. In the “DataLayer” Class Library add the following folders:

– Design (will contain class diagram of all classes in the Domain Folder)

– Domain (will contain classes to represent tables and their properties in the database)

– Mappings  (will contain xml files to map to tables in database through Nhibernate)

– Repositories (method implementation for Select, Insert, Update and Delete in tables of the database)

Step 2:- Install Nhibernate

NHibernate has to be installed for both tiers. Make sure NuGet is installed so that you can complete this step:

In Visual Studio: Tools >> Library Package Manager >> Manage NuGet Packages for Solution >> search for "NHibernate" >> install NHibernate into both projects
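Alternatively, a quick sketch using the NuGet Package Manager Console (assuming the project names from Step 1):

PM> Install-Package NHibernate -ProjectName DataLayer
PM> Install-Package NHibernate -ProjectName PresentationLayer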

[Screenshot: installing NHibernate via the NuGet package manager]

Step 3:- Define Business Objects (Domain Folder)

In the folder “Domain”, add a new class and name it “Book.cs” and copy the following properties:


public class Book
{
public virtual Guid Id { get; set; }
public virtual string Title { get; set; }
public virtual string Author { get; set; }
}

Step 4:- Define the Mapping (Mappings Folder)

Add an XML file and make sure to follow the naming convention NHibernate uses to recognise mapping files automatically: the file name must end in .hbm.xml

Let's name it "Book.hbm.xml" for this example, to match the Book class in the Domain.

IMP: Select the newly created XML file and, in its Properties, set Build Action to "Embedded Resource"

Step 5:- Setting XML Schema and properties

Now we need to set the XML schema for the file we just created. Let's start by importing the NHibernate XML schema as follows:

1. In Windows Explorer: Open the folder of your project

2. Open the packages folder, then open the NHibernate.3.3.2 folder

3. Copy the nhibernate-mapping.xsd file

4. Go to the main directory of your Class Library “DataLayer” and paste it.

So, now we need to define that schema in our mapping  XML file, and that should look like:


<?xml version="1.0" encoding="utf-8" ?>

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="MyFirstNhibernate" namespace="DataLayer.Domain">
<!-- Your Mapping Code Here -->
</hibernate-mapping>

Once the schema has been defined, we need to create the entity mapping by listing all columns and their properties in place of the <!-- Your Mapping Code Here --> comment above:

<class name="Book">
<id name="Id">
<generator class="guid" />
</id>
<property name="Title" not-null="true" />
<property name="Author" not-null="true" />
</class>

 

Step 6:- Configure NHibernate for SQL Server 2008

Create a new XML file in the main directory of your project and name it hibernate.cfg.xml

In the properties of hibernate.cfg.xml, change the setting "Copy to Output Directory" to "Copy Always"

Into hibernate.cfg.xml copy the following configuration:

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
<session-factory>
<property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
<property name="connection.connection_string">Data Source=XXX.XXX.XXX.XXX;Initial Catalog=DBname;MultipleActiveResultSets=True;Persist Security Info=True;User ID=username;Password=********</property>
<property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
<property name="show_sql">true</property>
</session-factory>
</hibernate-configuration>

P.S. Do not forget to update the connection string with the right values for the server address, catalog, user ID and password.

Copy this hibernate.cfg.xml file from the DataLayer into the PresentationLayer, where we will test the connection to the database.

Step 7:- Create ISessionFactory (NhibernateHelper)

Create a new class in the main directory of the class library and name it "NhibernateHelper.cs"; it will build the session factory the first time a session is requested (ONLY ONCE).

Add the following references to the NhibernateHelper class:


using NHibernate;
using NHibernate.Cfg;
using DataLayer.Domain;

Make sure the class is made public and copy the following ISessionFactory methods:


private static ISessionFactory _sessionFactory;


private static ISessionFactory SessionFactoryBook
{
get
{
if (_sessionFactory == null)
{
var configuration = new Configuration();
configuration.Configure();
configuration.AddAssembly(typeof(Book).Assembly);
_sessionFactory = configuration.BuildSessionFactory();
}
return _sessionFactory;
}
}


public static ISession OpenSessionBook()
{
return SessionFactoryBook.OpenSession();
}

Step 8:- Create Table from Schema and Test Connection

Now let's check that our code is working; to do so we will create the Book table from the mapping we defined.

In the “DataLayer” create a new class and name it “SchemaGenerator.cs”, add the following references:

using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;
using DataLayer.Domain;

And paste the following method (make sure to make the class public):

public void generate_book_schema()
{
var cfg = new Configuration();
cfg.Configure();
cfg.AddAssembly(typeof(Book).Assembly);


new SchemaExport(cfg).Execute(false, true, false);
}

Now let's go to the "PresentationLayer" and add a button "btn_Create_Book_Table" to the Default page.

On the on_click event call the method we just created in the SchemaGenerator.cs which should look something like:

new DataLayer.SchemaGenerator().generate_book_schema();
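Wrapped in its click handler, the call would look something like this (a sketch; the handler name simply follows the button ID given above):

protected void btn_Create_Book_Table_Click(object sender, EventArgs e)
{
    // Creates the Book table in the configured database from the embedded mapping
    new DataLayer.SchemaGenerator().generate_book_schema();
}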

Run this page and click the button.

Finally, check your tables in the database and you should see the table “Book” listed within the tables :)

Step 9:- Create Repositories (Domain Folder)

The repository interface is part of the domain, while the implementation will live in the Repositories folder to keep the domain persistence ignorant (PI).

In the Domain folder, Add >> New Item >> Interface >> name it "IBookRepository"

Set the interface to public and add the following code for the Select, Insert, Update and Delete operations (a.k.a. CRUD: create, read, update, delete):

public interface IBookRepository
{
void Add(Book book);
void Update(Book book);
void Delete(Book book);
Book GetById(Guid Id);
}

Step 10:- Define Methods in the Repositories (Repositories Folder)

Add a new class and name it "BookRepository"; this is where the implementation of each method will take place.

Unless we implement the IBookRepository interface we created above, the compiler will give us an error, so do not forget to declare the class as implementing the interface, as sketched below.
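A sketch of the class skeleton (assuming the default folder-based namespaces DataLayer.Domain and DataLayer.Repositories; the method bodies follow in the rest of this step):

using NHibernate;
using DataLayer.Domain;

namespace DataLayer.Repositories
{
    public class BookRepository : IBookRepository
    {
        // Add, Update, Delete and GetById implementations go here (shown below)
    }
}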

So let's implement the first method, Add:

public void Add(Book book)
{


using (ISession session = NhibernateHelper.OpenSessionBook())
using (ITransaction transaction = session.BeginTransaction())
{
try
{
session.Save(book);
//Save Changes in Database
transaction.Commit();
}
catch (Exception ex)
{
//If error occurs, all changes will be reverted
transaction.Rollback();
}
}

}

So that's our first method: it creates an ISession and a transaction, which lets us roll back if an exception occurs. This is mostly useful when updating multiple tables.

Now implement Update and Delete, which follow the same pattern except that you change session.Save(book) to session.Update(book) and session.Delete(book) respectively; a sketch of both is shown below.
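A minimal sketch of those two methods (same pattern as Add, using the NhibernateHelper from Step 7; the try/catch rollback is omitted here for brevity):

public void Update(Book book)
{
    using (ISession session = NhibernateHelper.OpenSessionBook())
    using (ITransaction transaction = session.BeginTransaction())
    {
        session.Update(book);
        transaction.Commit();
    }
}

public void Delete(Book book)
{
    using (ISession session = NhibernateHelper.OpenSessionBook())
    using (ITransaction transaction = session.BeginTransaction())
    {
        session.Delete(book);
        transaction.Commit();
    }
}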

The last method gets a row by a specific Id, which in this case is a GUID / uniqueidentifier:

public Book GetById(Guid Id)
{
using (ISession session = NhibernateHelper.OpenSessionBook())
return session.Get<Book>(Id);
}

Until you implement all 4 methods, an error similar to the following will keep showing up:

'DataLayer.Repositories.BookRepository' does not implement interface member 'DataLayer.Domain.IBookRepository.Update(DataLayer.Domain.Book)'

Step 11:- Testing the Add, Update, Delete and Get Methods

1. Testing the ADD Method:

[Screenshot: the Add Book form]

In our web project, the "PresentationLayer", add the following table to the Default page (or any other page you prefer):

<fieldset>
<legend> Book Details: </legend>
<table>
<tr>
<td>Title:</td>
<td><asp:TextBox ID="txt_title" runat="server"></asp:TextBox></td>
</tr>
<tr>
<td>Author:</td>
<td><asp:TextBox ID="txt_author" runat="server"></asp:TextBox></td>
</tr>
<tr>
<td colspan="2"><asp:Button ID="btn_add_book" runat="server" Text="Add New Book"
onclick="btn_add_book_Click" /></td>
</tr>
</table>
</fieldset>

In your code behind add the on_click event and the required references to DataLayer.Domain and DataLayer.Repositories:

protected void btn_add_book_Click(object sender, EventArgs e)
{
var book = new Book { Title = (txt_title.Text), Author = (txt_author.Text) };
IBookRepository repository = new BookRepository();
repository.Add(book);
}

Now, run the page in your browser and click the button “Add New Book”

Open your database >> Right-Click table Book >> Show Table Data:

[Screenshot: the Book table contents after the insert]

and there is our first row!!!!

2. Testing the Update Method:

Let's first copy the Guid (Id) of our first book and paste it somewhere easy to find so that we can edit that row.

In the same table we created above, add the following row:

<tr>
<td colspan="2"><asp:Button ID="btn_update_book" runat="server" Text="Update Book"
onclick="btn_update_book_Click" /></td>
</tr>
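Note that the update and delete handlers below read the book's Id from a textbox named txt_id, which is not part of the markup shown earlier; here is a sketch of the extra row you will likely need (the control name is taken from the code-behind that follows):

<tr>
<td>Id:</td>
<td><asp:TextBox ID="txt_id" runat="server"></asp:TextBox></td>
</tr>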

and in the code-behind let's add its on_click event:

protected void btn_update_book_Click(object sender, EventArgs e)
{
var book = new Book { Id = new Guid(txt_id.Text), Title = (txt_title.Text), Author = (txt_author.Text) };
IBookRepository repository = new BookRepository();
repository.Update(book);
}

[Screenshot: the Update Book form]

Now, in your browser, paste the GUID into the ID textbox, enter the new title and author, and click "Update Book".

Let's confirm the update: refresh the table contents and you should see your changes there:

[Screenshot: the Book table contents after the update]

3. Testing the Delete Method

Let's add a new button "Delete Book" to the current HTML table:

<tr>
<td colspan="2"><asp:Button ID="btn_delete_book" runat="server" Text="Delete Book"
onclick="btn_delete_book_Click" /></td>
</tr>

Also as we did before, add the on_click event in the code behind:

protected void btn_delete_book_Click(object sender, EventArgs e)
{
var book = new Book { Id = new Guid(txt_id.Text) };
IBookRepository repository = new BookRepository();
repository.Delete(book);
}

Run the page in your browser and paste the same GUID so that we delete that same row:

[Screenshot: the Delete Book form]

The ID is enough for it to work; click the "Delete Book" button and refresh your table one last time:

[Screenshot: the Book table contents after the delete]

4. Testing SELECT ( Get Book By ID )

The last method to test is selecting a row by ID, so let's start by adding a new button to the HTML table:

<tr>
<td colspan="2"><asp:Button ID="btn_get_book_by_id" runat="server" Text="Get Book By ID"
onclick="btn_get_book_by_id_Click" /></td>
</tr>

Add the code behind:

protected void btn_get_book_by_id_Click(object sender, EventArgs e)
{
Guid Id = new Guid("4823c5b6-7b70-459b-9a8b-86beb624a623");     //Replace this GUID with your Id
IBookRepository repository = new BookRepository();
var book = repository.GetById(Id);

txt_title.Text = book.Title;
txt_author.Text = book.Author;
}

Now we need to add a new row in the Book table, so run the Page, fill the Title and Author and click “Add New Book”

Refresh the table and copy the Id (guid)

Paste the GUID in the code behind where we have Guid Id = new Guid(“”);

Run the Page again and click “Get Book By ID”

[Screenshot: the form after Get Book By ID]

As in the screenshot above, the Title and the Author of the given ID will be displayed.

And that is done. I hope you did not encounter any issues that have not been covered in this tutorial. If you did, do not hesitate to ask, or if you have a solution to share, helpful comments are appreciated.

Good luck and enjoy :)

 

Download the project demo: http://www.twiggle-web-design.com/tutorials/nhibernate/nhibernate.html

 

Author: "rochcass"
Date: Tuesday, 05 Mar 2013 11:04

I’ve recently moved our Membership data to NHibernate on SQL Azure, and wanted a way to store sensitive data "safely".

My starting point for this was the excellent article at http://nhforge.org/blogs/nhibernate/archive/2009/02/22/encrypting-password-or-other-strings-in-nhibernate.aspx

The only problem is that the default implementation does not salt the encrypted data – so identical plain text values are identical when encrypted and this can make cracking the data easier.

A salt is a random value that is used to seed the encryption algorithm to avoid this. The salt should be randomly generated (i.e., not derived from the data itself) and can be stored in an insecure way, as knowing it doesn't particularly help when trying to crack the encryption.

I spent ages going through different routes to achieve this:

- Adding a salt column to the entity schema – FAIL – because at the point of decryption it’s not possible to read from other columns.

- Writing an interceptor and “transparently” encrypting and decrypting data when loaded and saved – FAIL – didn’t seem to be working in a whole load of ways – probably my bad

- Adding a whole bunch of fields to the entity model that aren't persisted and allow programmatic access to decrypted strings in conjunction with a salt column – FAIL – just plain messy

In the end the answer was staring me in the face and popped into my head when I woke up the next day – proving that working at something till 1am isn't always the best option!

The solution wasn’t actually in the encrypted field code (http://unhaddins.googlecode.com/svn/trunk/uNhAddIns/uNhAddIns/UserTypes/EncryptedString.cs) but actually in the encryptor code.

I modified the code so that on encryption a random salt string is generated and used to encrypt, but is then prepended to the resulting string. The encryptor result is now of the form <salt>:<encrypted data>.

On decryption all that’s needed is to split the string and reverse the process using the correct salt.
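The following is not the author's actual change (which lives in the uNhAddIns encryptor class linked above), but a self-contained sketch of the same <salt>:<encrypted data> pattern, using AES with a key derived from a passphrase and a per-value random salt; the passphrase constant, key sizes and iteration count are illustrative assumptions:

using System;
using System.Security.Cryptography;
using System.Text;

public static class SaltedStringEncryptor
{
    // Illustration only: in real code the passphrase comes from configuration, not a constant.
    private const string Passphrase = "change-me";

    public static string Encrypt(string plainText)
    {
        // Random, per-value salt that seeds the key derivation
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using (var aes = Aes.Create())
        using (var keys = new Rfc2898DeriveBytes(Passphrase, salt, 10000))
        {
            aes.Key = keys.GetBytes(32);
            aes.IV = keys.GetBytes(16);
            using (var encryptor = aes.CreateEncryptor())
            {
                byte[] plain = Encoding.UTF8.GetBytes(plainText);
                byte[] cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);
                // Stored form is "<salt>:<encrypted data>", as described above
                return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(cipher);
            }
        }
    }

    public static string Decrypt(string stored)
    {
        // Split off the salt and reverse the process using it
        string[] parts = stored.Split(new[] { ':' }, 2);
        byte[] salt = Convert.FromBase64String(parts[0]);
        byte[] cipher = Convert.FromBase64String(parts[1]);

        using (var aes = Aes.Create())
        using (var keys = new Rfc2898DeriveBytes(Passphrase, salt, 10000))
        {
            aes.Key = keys.GetBytes(32);
            aes.IV = keys.GetBytes(16);
            using (var decryptor = aes.CreateDecryptor())
                return Encoding.UTF8.GetString(decryptor.TransformFinalBlock(cipher, 0, cipher.Length));
        }
    }
}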

The only negative is that it's not possible to query across the dataset using the encrypted fields, as the salt is field specific and so can't be pre-calculated and used in the SQL command. It's necessary to retrieve the whole dataset, which is transparently decrypted, and then query across it in the application.

We only use encryption for fairly small tables (generally user data), so this is a hit we're prepared to take to protect client data.

Author: "vlearningsolutions"
Date: Thursday, 21 Feb 2013 11:00
This article will discuss how I managed to use a Sybase database alongside SQL Server with S#arp
Author: "Raymund"
Date: Friday, 08 Feb 2013 14:33

Learning How to be a Better Developer

I see myself as a decent-ish software developer. I am great at the things I do know, but I have been a developer long enough to have built systems without taking advantage of the full set of language features available to me. I have neglected to learn some of the more advanced language features in .NET, some of the valuable patterns and practices that have been discovered, as well as a lot of the tools that are available to build better software.

This will be my last post for the week and I have a long enough list of objectives to keep me busy for a while.

As I mentioned in my initial post, I am using this blog as a self evaluation tool. If you are reading this and learning from it, GREAT! If you are bored, skip to the next post. This blog is almost like a pocket notepad for me so I know that some of these posts are going to be a bit tedious.

Objectives:

Admin

I have 31 pages of hand-written notes from learning in the past two weeks. I need to organise them and type them up so that I can refer back to them easily. I will use Evernote for this.

Broad Learning

Technical Learning

  • Read summaries of StructureMap, Spring.NET, Unity and Ninject. While I feel like I’ve already decided on Castle Windsor as a DI container, I don’t know anything about any other ones so it would be best to have a quick peruse through them.
  • Gather more insight into using Entity Framework, ActiveRecord and NHibernate, and maybe do a small tutorial on each.
  • Read about the Logging Facility in Castle Windsor as Simone Busoli makes good mention of it in this article.
  • Learn how to properly implement IInitializable/ISupportable and IDisposable interfaces
  • Start learning how to use Regular Expressions

As you can see my plate is full.

Progress

I have made some progress though. Here is what I’ve learned so far:

  • I now understand the concept of test-first development and how to implement it in simple scenarios
    • Defining units of code
    • Red – Green – Refactor Cycle
    • Using NUnit
    • Differentiating between unit tests and integration tests
    • Testing the sad path
    • Refactoring with great help from ReSharper
  • I’ve started using ReSharper to increase my productivity
  • I’ve brushed over behaviour-driven development, which is really just a more refined way of looking at test-driven development
  • I’ve schooled myself on the following patterns and methods:
    • Builder Design Pattern
    • Template Method Design Pattern
    • Strategy Pattern
    • Factory Method
    • Adapter Pattern
    • Decorator Pattern
  • I now understand Dependency Injection
  • I’ve learned how to use two C# keywords: yield and lock
  • I’ve learned how to use the Castle Windsor container.
    •  I’ve learned about the out of box lifestyles and the lifecycle of a component inside the container
    • I’ve learned how to inherit from Castle Windsor container to create a custom container
    • I’ve learned how to use configuration files to inject dependencies into the container and change behaviour of the component and change the injected dependencies using pre-processor directives
    • I’ve learned how to build a custom type converter for a component
    • I’ve learned how to use two semantic facilities: Factory Support and Startable

So not too shabby on my part.

Catch you mofo’s on the flip side.

BEAN

Author: "bean"
Date: Friday, 08 Feb 2013 11:22

Dependency Injection & Castle Windsor

As I mentioned in my last post, one of my goals was to school myself on inversion of control (IoC) so that I could start practicing it in my daily work. I also mentioned that IoC is an incredibly broad and vague practice that can be implemented using a number of methods such as:

  • Using a service locator
  • Using contextualised lookup
  • Using dependency injection

Well, I started finding this topic hard to wrap my head around, mainly because I couldn't put it into practice based on the definitions provided by gurus like Martin Fowler. You see, guys like this are careful not to put their blinkers on and concentrate too much on any one practical context, as they need to cover all bases. So after a few questions I posted on StackOverflow, I've actually made some real progress. Firstly, I have been told that modern languages such as C# and Java are great for using dependency injection, while service locator and contextualised lookup were recognised as practices that cater for IoC in older or less fully OO languages. That was when I decided to focus on learning how to practice dependency injection.

This led me on a journey to find the best container framework for dependency injection. The big ones are:

  • Castle Windsor
  • StructureMap
  • Spring.NET
  • Unity
  • Ninject
  • PicoContainer

And there are quite a lot more. I went for a beer with Susan on Wednesday and he says that they use Castle Windsor because it seemed to be the most feature rich and mature container for their needs at the time. I’ve been told that Spring.NET is the most ‘enterprisy’ one (whatever that means) but that it requires more work to set it up. I am one man who needs to climb a mountain of refactoring so the long path just doesn’t seem like the right path. Having said that, taking shortcuts would be, by far, a worse decision. I learned this the hard way by using LINQ2SQL as the ORM for Behemoth.

PicoContainer and Ninject have been described as light containers that perform well. With my very limited knowledge and gut, I decided to look at Castle Windsor because Martin Fowler describes it as ‘well documented’, there are a lot of project teams that seem to be using it and it just feels like a mature product with the right amount of well considered features.

So far I had decided two things:

  1. Learn and focus on dependency injection to apply IoC practices in my code
  2. Learn how to make good use of Castle Windsor

This led me onto David Siew’s article on getting started with Castle Windsor which in turn led me to this magnificent piece of work by Simone Busoli. Thanks guys, I have worked through my first Castle Windsor tutorial and feel ready to start leveraging the full power of what seems to be an incredible container.

I won't try to regurgitate what I've learned here, because why plagiarise? So far, everything that I know about Castle Windsor is courtesy of Simone Busoli. If you want to get started using this container, work through this 4-part tutorial. It is a bit dated and Castle Windsor has changed a bit since it was written, but I had a bit of fun figuring out what has changed and how to adapt to the current version. A minimal flavour of the container is sketched below.
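A minimal sketch of registering and resolving a component with Castle Windsor (the interface and class names here are made up for illustration; only the fluent registration API is assumed):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IMessageSender
{
    void Send(string message);
}

public class EmailSender : IMessageSender
{
    public void Send(string message) { /* send an email */ }
}

public class Program
{
    public static void Main()
    {
        // Register the abstraction against an implementation, then let the container resolve it
        var container = new WindsorContainer();
        container.Register(Component.For<IMessageSender>().ImplementedBy<EmailSender>());

        IMessageSender sender = container.Resolve<IMessageSender>();
        sender.Send("Hello from Windsor");

        container.Dispose();
    }
}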

What About Actually Working For a Change?

Well, I have to say, for the past two weeks I have been neglecting a lot of work in favour of reading, learning and participating in StackOverflow.


I need to find a balance between work and learning so that the money doesn’t totally dry up in the meantime.

I’ve asked a few questions about what ORM I should use to refactor Behemoth posing the two options of Entity Framework and NHibernate.

Susan reckons NHibernate is great because it provides more granular control over the generated code, while Microsoft has tried to include too much in Entity Framework. I'm still blind to this. All I know is LINQ2SQL, and that LINQ2SQL is the wrong option for medium-large database systems. The reason I say this is that it is too rich. It does so much and executes so much code in normal, everyday operations that you have to tiptoe around it when your requirements start becoming a bit more complex. I guess the main reason for this is that it implements INotifyPropertyChanged as well as INotifyPropertyChanging, and when you get an inexperienced dev who inserts a lot of processing into a get or set accessor, it starts getting a little crazy.

Look, I am not dissing LINQ2SQL. It is a fantastic ORM that has tided us over for 6 years and I would hands down recommend it for small-medium projects. It translates lambda syntax into SQL LIKE A BOSS! In fact, it outperforms most DBAs that I know! But the time has come to recognise that Microsoft is trying hard to shift developer focus away from LINQ2SQL in favour of Entity Framework.

So I haven’t decided yet. Here are the pros and cons:

Entity Framework

Pros:

  • Developed by Microsoft for Microsoft Language Users
  • Already in Version 4 (doesn’t look like it’s going away anytime soon)
  • Well integrated with Visual Studio
  • Lots of learning material and well documented

Cons:

  • I don’t know of too many big project teams using it
  • I’ve been told that, similar to LINQ2SQL, it is incredibly feature rich at the expense of performance
  • The Jedis on Stack Overflow seem to favour NHibernate

 

NHibernate

Pros:

  • Mature product
  • Used by many large project teams who speak volumes about it
  • Well integrated with Castle Windsor
  • The Jedis on Stack Overflow sing its praises

Cons:

  • Not as well integrated into VS
  • Worried that Microsoft will take what they’ve done and produce something better soon
  • I naturally resist products that aren’t from Microsoft as I have been brainwashed over the past twenty years to believe that the Microsoft way is the most intuitive way.

Any Other ORMs?

Ok so you know that I am considering 2 big shot ORMs to use in Behemoth. Would you suggest any others that are better than either of these? If so, let me know and tell me why. I’m hungry to learn.

Side Note

On a tangent, I just want to say that my fiancé made the most awesome salami, cheese and mustard sandwiches for my lunch today. Thank you my love, they were the BOMB.COM!

 

Anyway, while I salivate over the deliciousness of those sandwiches I just ate, you go drink your soup or whatever.

Adiosh!

BEAN

 

Author: "bean"
Date: Tuesday, 29 Jan 2013 10:29

There are many ways to connect to an Oracle database from a .NET application. You can use ADO.NET directly, or use an ORM solution such as NHibernate and the like. In my workplace we were previously using NHibernate, but for some reason we stopped using it and reverted to the old ADO.NET way.

However, after some experiments, I’ve found my own way to work with Oracle in a .NET environment. I’m using a library called BLToolkit, along with several other tools. After reading their documentation, I’m convinced enough to use it, and so far I’m very satisfied with the result.

Our development approach is database-first, and this article will only cover that. So if you are using a code-first approach, this article will not be relevant to you.

The first thing we need is a data model class that maps to our Oracle database. Since we are already spending our precious time designing the database, we want the class to be auto-generated from the database. BLToolkit provides this kind of functionality using a T4 template. Unfortunately, it doesn't support generating the data model class directly from an Oracle schema. Since I'm not capable enough to write a T4 template to achieve this, I need a workaround.

My workaround is to use SQL Server and a utility called SSMA for Oracle. The idea is to import our Oracle database into a SQL Server database and generate the data model from there. For this purpose, SQL Server Express Edition is sufficient, so you don't have to worry about licensing issues.

Importing the database

  1. Open SSMA for Oracle and create a new project.
  2. Click Connect to Oracle, and provide the information to connect to the Oracle server.


    Depending on which user you use to connect to Oracle, you might be shown a warning. In my case, I can safely ignore it and click Continue.


  3. Click Connect to SQL Server, and provide the information to connect to the SQL Server.


    If you are connecting to SQL Server Express, like me, you will get a warning. Just ignore it and click Continue.


  4. Optionally, you can change the type mapping between Oracle and SQL Server. I usually change the mapping of number to decimal instead of the default float. The reason is that I usually use number as a primary key in Oracle; if it is mapped as float in SQL Server, it would lose its primary key status, because SSMA doesn't seem to allow float to be used as a primary key.


  5. If you made a change to the Oracle schema after connecting to Oracle, you can refresh them by right-clicking the schema name and select Refresh from Database.


  6. Tick the checkbox next to the schema(s) you want to import, and click Convert Schema.


    Optionally, you could also change the target schema in SQL Server.

    While the process is running, you can inspect it through the Output window.

  7. On the SQL Server Metadata Explorer, you’ll see the target schema produced by the conversion process. Right click on it, and select Sync with Database. This process will actually update our SQL Server with the new definition.


  8. Done. Now you should be able to see your shiny new database (or an up-to-date version of the existing ones) in SQL Server.

Handling the problems

You may encounter some issues while doing this conversion process (you can see them in the Error List window). Once, I got an error setting a float column as a primary key. Changing the type mapping of number from float to decimal (as described in step 4 above) solves this.

However, some issues might not be so important to us. Keep in mind that the main reason we are doing this process is merely to enable us to auto-generate our data model class. So as long as the SQL Server tables have the same column names, column types, primary key, and foreign key specifications as their counterparts in Oracle, it should be good enough for us.

In the next article, I will cover how to generate the data model class using the T4 template provided by BLToolkit.

Author: "andri"
Date: Monday, 28 Jan 2013 22:34

NHibernate's inverse concept is one of the most discussed mapping features it defines (if not the most!). In practice, you'll use inverse when you need to set the owner of a "relationship". Before diving into it and how it affects the way NH generates SQL and manages relationships between entities, here are some facts about inverse:

  • It’s a Boolean attribute used only in collection  and join mappings;
  • By default, it’s set to false;
  • It only makes sense when you’re configuring bidirectional relationships between entities;
  • You should only set inverse to true on one of the sides of the relationship;
  • Not setting it on any of the sides will generate superfluous SQL instructions.

OK, before trying to understand what this means, let's start with an RDE (i.e., a really dumb example): blogs and posts will do for now!

public class Blog {
    public Int32 BlogId { get; set; }
    public Int32 Version { get; set; }
    public String Description { get; set; }
    private IList<Post> _posts = new List<Post>();

    public IEnumerable<Post> Posts {
        get { return new ReadOnlyCollection<Post>(_posts); }
        set { _posts = new List<Post>(value); }
    }

    public void AddPost(Post post) {
        post.Blog = this;
        _posts.Add(post);
    }

    public void RemoveAll() {
        foreach (var post in _posts) {
            post.Blog = null;
        }
        _posts.Clear();
    }
}

public class Post {
    public Int32 PostId { get; set; }
    public String Description { get; set; }
    public Blog Blog { get; set; }
}

As you can see, there's nothing too fancy going on here. Each blog has a collection of posts (with each post referencing back to its parent blog) and you'll typically use the AddPost method to add a new Post. After building the classes, we can concentrate on building the mappings. And there's nothing like Fluent NHibernate to give us a hand. Since I'm not really a great fan of automapping, here's the code I've written to get us started:

public class PostMapping : ClassMap<Post> {
    public PostMapping() {
        Table("Post");
        Not.LazyLoad();
        Id(p => p.PostId)
            .GeneratedBy.Identity()
            .Default(0);
        Map(p => p.Description);
        References(p => p.Blog, "BlogId")
            .Not.LazyLoad();
    }
}

public class BlogMapping : ClassMap<Blog> {
    public BlogMapping() {
        Table("Blog");
        Not.LazyLoad();

        Id(b => b.BlogId)
            .GeneratedBy.Identity()
            .Default(0);
        Map(b => b.Description);
        Version(b => b.Version);
        HasMany(b => b.Posts)
            .Access.CamelCaseField(Prefix.Underscore)
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .Not.LazyLoad()
            .KeyColumn("BlogId");
    }
}

This is more than enough for you to reproduce the tables (if you're interested in running the examples). I'm also not showing the code I've written to create NH's session factory from these mappings (Fluent's wiki shows you how to get started, so I'm not repeating it here). Having said that, let's start with the code required to create a new Blog:

private static void CreateNewBlog() {
    var sessionFactory = SessionFactory.CreateSessionFactory();
    using (var session = sessionFactory.OpenSession()) {
        using (var tran = session.BeginTransaction()) {
            var blog = new Blog() {Description = "Testing blog"};
            var post = new Post() {Description = "Post 1"};
            blog.AddPost(post);
            session.SaveOrUpdate(blog);
            tran.Commit();
        }
    }
}

If we were writing the SQL by hand, we could probably agree that 2 SQL instructions would be more than enough for getting the entities stored in the database: we'd start by saving the Blog, getting its ID, and then we'd insert all its associated posts into the Post table (oh, and since Post is also an entity, we'd probably get its ID too, since I'm using autogenerated IDs – not the best of choices for real-world projects, but more than enough for our current discussion). The following snippet shows the SQL generated by NHibernate to save the previous objects to the database:

INSERT INTO Blog (Version, Description) VALUES (@p0, @p1);@p0 = 1 [Type: Int32 (0)], @p1 = ‘Testing blog’ [Type: String (0)]    
select @@IDENTITY    
INSERT INTO Post (Description, BlogId) VALUES (@p0, @p1);@p0 = ‘Post 1′ [Type: String (0)], @p1 = 4 [Type: Int32 (0)]    
select @@IDENTITY    
UPDATE Post SET BlogId = @p0 WHERE PostId = @p1;@p0 = 4 [Type: Int32 (0)], @p1 = 14 [Type: Int32 (0)]    

The first SQL instructions seem to be going as expected, but then there's an extra UPDATE instruction right at the end which, at first sight, makes no sense: we're setting the BlogId column of the Post row we've just inserted, even though the INSERT already supplied it, so this isn't really needed.

If we change the mappings by setting the inverse flag on the Blog’s Posts mapping, then the results are completely different:

public class BlogMapping:ClassMap<Blog> {
    public BlogMapping() {
        Table("Blog");
        Not.LazyLoad();

        Id(b => b.BlogId)
            .GeneratedBy.Identity()
            .Default(0);
        Map(b => b.Description);
        Version(b => b.Version);
        HasMany(b => b.Posts)
            .Access.CamelCaseField(Prefix.Underscore)
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .Not.LazyLoad()
            .KeyColumn("BlogId")
            .Inverse();
    }
}

And here’s NH’s generated SQL for the previous insertion:

INSERT INTO Blog (Version, Description) VALUES (@p0, @p1);@p0 = 1 [Type: Int32 (0)], @p1 = ‘Testing blog’ [Type: String (0)]    
select @@IDENTITY    
INSERT INTO Post (Description, BlogId) VALUES (@p0, @p1);@p0 = ‘Post 1′ [Type: String (0)], @p1 = 5 [Type: Int32 (0)]    
select @@IDENTITY    

Oh yes, now we’re in business! But what’s going on here? What does Inverse do? 

IMO, the main problem in getting the inverse attribute is its "negative" meaning. If we set inverse to true, we're saying that this entity is not responsible for maintaining the relationship. On the other hand, setting inverse to false does mean that the entity is the one responsible for managing the relationship. So, when we added the Inverse call to the Blog side, we're saying that the Blog entity isn't responsible for managing its relationship with Post (at the database level). In this case, managing the relationship should be done on the Post's side. This is a little bit counterintuitive, but if we take a step back and recall how things work at the database level, it might make sense.

In a database, each relationship is represented by a foreign key on the many side (going back to our RDE, the Post table will have a foreign key – BlogId – which references the primary key of the Blog table). Taking this into consideration does help in understanding the inverse attribute: we can insert entries in the Blog table without caring about their associated Posts, but we can't really add a Post without setting its BlogId foreign key. And that's why the inverse attribute we've set on the Blog side of the mappings eliminated the superfluous UPDATE call: with it, Blog is no longer responsible for making sure that the foreign key of each post is correctly set up (that will be taken care of by Post).
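For reference, here is a sketch of what the same Posts collection would look like in a plain hbm.xml mapping rather than the Fluent mapping used in this post (the attribute values mirror the Fluent calls above):

<bag name="Posts" access="field.camelcase-underscore" cascade="all-delete-orphan" inverse="true" lazy="false">
  <key column="BlogId" />
  <one-to-many class="Post" />
</bag>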

It goes without saying that in order for the persistence of the Blog aggregate to work, Post must have all the required info so that it doesn’t violate the foreign key rule when it persists itself to the database. And in fact, my initial code already does all the necessary setup. Notice that adding a Post to a Blog will always set its Blog field to its parent (the owning Blog) and that that field is defined as a reference to the Blog table on the mapping file.

Before ending, one extra note: did you notice that Cascade.AllDeleteOrphan() call in the mappings? Well, that’s really important when you want to remove a Post that has previously been persisted from the database. Here’s some code that will help you understand what I’m saying:

private static void UpdateBlog() {
    var sessionFactory = SessionFactory.CreateSessionFactory();
    using (var session = sessionFactory.OpenSession()) {
        using (var tran = session.BeginTransaction()) {
            var blog = session.Get<Blog>(2);
                    
            blog.RemoveAll();
            var newPost = new Post {Description = "Post 2"};
            blog.AddPost(newPost);

            tran.Commit();
        }
    }
}

After loading a previously saved Blog, I'm removing all its posts and adding a new one. Here's the log of the generated SQL when you don't use the delete-orphan cascade:

SELECT blog0_.BlogId as BlogId0_0_, blog0_.Version as Version0_0_, blog0_.Description as Descript3_0_0_ FROM Blog blog0_ WHERE blog0_.BlogId=@p0;@p0 = 2 [Type: Int32 (0)]    
SELECT posts0_.BlogId as BlogId1_, posts0_.PostId as PostId1_, posts0_.PostId as PostId1_0_, posts0_.Description as Descript2_1_0_, posts0_.BlogId as BlogId1_0_ FROM Post posts0_ WHERE posts0_.BlogId=@p0;@p0 = 2 [Type: Int32 (0)]    
INSERT INTO Post (Description, BlogId) VALUES (@p0, @p1);@p0 = ‘Post 2′ [Type: String (0)], @p1 = 2 [Type: Int32 (0)]    
select @@IDENTITY    
UPDATE Blog SET Version = @p0, Description = @p1 WHERE BlogId = @p2 AND Version = @p3;@p0 = 5 [Type: Int32 (0)], @p1 = ‘Testing blog’ [Type: String (0)], @p2 = 2 [Type: Int32 (0)], @p3 = 4 [Type: Int32 (0)]    
UPDATE Post SET Description = @p0, BlogId = @p1 WHERE PostId = @p2;@p0 = ‘Post 2′ [Type: String (0)], @p1 = NULL [Type: Int32 (0)], @p2 = 12 [Type: Int32 (0)]    

As you can see, the last SQL instruction is trying to set the Post table's foreign key to null, and that will not end well. Setting the delete-orphan cascade ensures that NH generates a DELETE instead of an UPDATE. Here's the SQL generated when you add the delete-orphan mapping:

SELECT blog0_.BlogId as BlogId0_0_, blog0_.Version as Version0_0_, blog0_.Description as Descript3_0_0_ FROM Blog blog0_ WHERE blog0_.BlogId=@p0;@p0 = 2 [Type: Int32 (0)]    
SELECT posts0_.BlogId as BlogId1_, posts0_.PostId as PostId1_, posts0_.PostId as PostId1_0_, posts0_.Description as Descript2_1_0_, posts0_.BlogId as BlogId1_0_ FROM Post posts0_ WHERE posts0_.BlogId=@p0;@p0 = 2 [Type: Int32 (0)]    
INSERT INTO Post (Description, BlogId) VALUES (@p0, @p1);@p0 = ‘Post 2′ [Type: String (0)], @p1 = 2 [Type: Int32 (0)]    
select @@IDENTITY    
UPDATE Blog SET Version = @p0, Description = @p1 WHERE BlogId = @p2 AND Version = @p3;@p0 = 7 [Type: Int32 (0)], @p1 = ‘Testing blog’ [Type: String (0)], @p2 = 2 [Type: Int32 (0)], @p3 = 6 [Type: Int32 (0)]    
DELETE FROM Post WHERE PostId = @p0;@p0 = 18 [Type: Int32 (0)]    

There’s still more to say about inverse (for instance, we haven’t discussed many-to-many relationships), but I believe that this is more than enough for getting you started.

That’s it for now. Stay tuned for more!

Author: "Luis Abreu"
Date: Tuesday, 22 Jan 2013 05:13

What’s the main difference between:

Restrictions.Disjunction()

and

Restrictions.Conjunction() ?

Answer:

Disjunction translates to OR, while Conjunction translates to AND, in the generated SQL.
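A quick sketch using NHibernate's criteria API (assuming an open ISession, using directives for NHibernate and NHibernate.Criterion, and a hypothetical Book entity with Title and Author properties):

// Title = 'NHibernate in Action' OR Author = 'Kuate'
var either = session.CreateCriteria<Book>()
    .Add(Restrictions.Disjunction()
        .Add(Restrictions.Eq("Title", "NHibernate in Action"))
        .Add(Restrictions.Eq("Author", "Kuate")))
    .List<Book>();

// Title = 'NHibernate in Action' AND Author = 'Kuate'
var both = session.CreateCriteria<Book>()
    .Add(Restrictions.Conjunction()
        .Add(Restrictions.Eq("Title", "NHibernate in Action"))
        .Add(Restrictions.Eq("Author", "Kuate")))
    .List<Book>();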

Author: "Jen Bagorio"
Date: Sunday, 20 Jan 2013 01:24

When NHibernate and Fluent NHibernate work, it's a beautiful thing. They almost make building applications that use a relational database painless. However, sometimes you run into something that costs a couple of developers the better part of a day, and it makes you want to scream. The team I'm on had one of those days as a result of trying to refactor our database to fit better with our business instead of NHibernate.

Here’s a portion of our original database structure:

[Diagram: original database structure]
The changes we wanted to make fell into two categories. First, we wanted to rename some tables to better align with the business vocabulary.  For example, the table ProductMedia would become MediaPool because that’s what marketing called it.  The second change turned out to be more problematic.

Houston, We Have a Problem

Some of our entities actually represent the relationship between two other entities. For example, we have a SiteProduct that represents a Product on a Site. When we first designed the database, we set up the relationship entities with a single-field surrogate key: a unique, automatically generated ID that only exists to serve the database. The problem was it had no meaning to the business, so our screens tended to know the SiteId and the ProductId but not the SiteProductId. This forced us to constantly join or look up the SiteProduct table to get the SiteProductId so we could get to the data we needed to perform work. We wanted to eliminate those surrogate keys and use the combination of SiteId and ProductId as a composite key instead. We had similar circumstances in a couple of other relationship tables that we also wanted to improve. We ended up with this:

[Diagram: reworked database structure using composite keys]

After reworking our entities and Fluent NHibernate maps to mirror our new structure, we ran into the following rather cryptic exception when running our unit test suite against SQLite:

System.ArgumentOutOfRangeException : Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index

The problem was maps like this one:

public class MediaProductPresentationMap : ClassMap<MediaProductPresentation>
{
    public MediaProductPresentationMap()
    {
        Table("MediaProductPresentations");
        Id(x => x.Id, "MediaProductPresentationId").GeneratedBy.Identity().UnsavedValue(0);
        Map(x => x.SequenceNumber).Not.Nullable();
        References(x => x.Presentation, "ProductPresentationId").Not.Nullable().Cascade.None();
        References(x => x.MediaInPool).Columns("MediaId", "ProductId").Not.Nullable().Cascade.None();
        References(x => x.SiteProduct).Columns("SiteId", "ProductId").Not.Nullable().Cascade.None();
    }
}

Notice that ProductId is one component of the composite key for MediaInPool and one component of the composite key for SiteProduct.  As it turns out, NHibernate simply cannot deal with one field being mapped twice.  The only work around mentioned anywhere is to make at least one of the references read only.  Unfortunately, that doesn’t work in this case because each of the keys consists of two fields.  If we make the reference to MediaInPool read only, the MediaId does not get set and the insert fails;  If we make the SiteProduct reference read only, the insert fails with a null SiteId.

A couple members of the team looked for solutions all afternoon.  I got involved as well and we just could not find a good answer.  After dinner, I sat down and examined the issue one more time and came up with a bit of a hack to solve the problem.  The application needed NHibernate to set all the ID fields when inserting a new MediaProductPresentation.  It also needed to be able to traverse the references when reading.  However, since the MediaInPool and SiteProduct already exist before the application tries to add a MediaProductPresentation, the references could be read only as long as there was another way to tell NHibernate to update those ID fields.

The Solution

The solution required changes to both the entity and the map.  First the entity:

public class MediaProductPresentation
{
    public virtual string Id { get; set; }
    public virtual SiteProduct SiteProduct { get; set; }
    public virtual ProductPresentation Presentation { get; set; }
    public virtual MediaInPool MediaInPool { get; set; }
    public virtual int SequenceNumber { get; set; }
    public virtual int MediaId { get { return MediaInPool.Media.Id; } protected set { } }

    /// These two properties exist solely to support NH persistence. On a save, these are the ones that are actually persisted in the NH map.
    /// They should not ever need to be exposed to other classes.
    protected internal virtual int SiteId { get { return SiteProduct.SiteId; } protected set { } }
    protected internal virtual int ProductId { get { return SiteProduct.ProductId; } protected set { } }

    protected MediaProductPresentation() {}

    public MediaProductPresentation(SiteProduct siteProduct, ProductPresentation presentation, MediaInPool mediaInPool, int sequenceNumber)
    {
        SiteProduct = siteProduct;
        Presentation = presentation;
        MediaInPool = mediaInPool;
        SequenceNumber = sequenceNumber;
    }
}

The entity has both objects to represent the references for the read case (e.g. SiteProduct) and ID fields for the references to support the write case (e.g. SiteId, ProductId). The ID fields use a clever pattern taught to me by one of my colleagues, Tim Coonfield. Although NHibernate can see them, other classes in the application cannot.

With the entity setup correctly, the map is easy though it looks a little strange:

public class MediaProductPresentationMap : ClassMap<MediaProductPresentation>
{
    public MediaProductPresentationMap()
    {
        Table("MediaProductPresentations");
        Id(x => x.Id, "MediaProductPresentationId").GeneratedBy.Identity().UnsavedValue(0);
        Map(x => x.SequenceNumber).Not.Nullable();
        Map(x => x.ProductId).Not.Nullable();
        Map(x => x.MediaId).Not.Nullable();
        Map(x => x.SiteId).Not.Nullable();
        References(x => x.Presentation, "ProductPresentationId").Not.Nullable().Cascade.None();
        References(x => x.MediaInPool).Columns("MediaId", "ProductId").Not.Nullable().Cascade.None().Not.Insert().Not.Update().ReadOnly();
        References(x => x.SiteProduct).Columns("SiteId", "ProductId").Not.Nullable().Cascade.None().Not.Insert().Not.Update().ReadOnly();
    }
}

The map tells NHibernate to ignore the object references when inserting or updating. Instead, NHibernate sets the various foreign key reference IDs directly.

It May Not be Pretty, But It Works

It’s a shame that NHibernate is not smart enough to reference multiple tables that share common elements in their composite keys. As our data architect put it, this is why almost everybody that uses NHibernate sticks to single field keys and uses surrogate keys on relationship tables. This work around makes it possible to use composite keys where they make sense without fear. I know it’s not exactly beautiful, but, at least for us, it was a small price to pay to have the database structure align better with the business.

Author: "Tom Cabanski"
Date: Monday, 14 Jan 2013 11:42

Our Client is looking for a C#.Net developer with experience of WCF / Winforms and VB. The customer is an International leader in the Telecommunications sector.

Technical requirements

Good .NET knowledge (3.5 / 4) – C# & VB.NET

  • WPF & winforms
  • Experience with ORM (NHibernate)
  • Good knowledge of SQL server (T-SQL)
  • Multithreading (TPL)
  • Algorithm complexity
  • WCF-knowledge is a plus.

Soft skills 

  • Able to work alone or in a small team, take a project end to end (from technical analysis to implementation and support).
  • Result driven
  • Ability to mingle in discussion (challenge requirement, functional aspect and architecture when relevant)
  • Telecom knowledge (voice business) is a plus.
Author: "itpselect"
Date: Friday, 04 Jan 2013 21:58

To execute a SQL statement from an NHibernate session, we need to create a session and start a transaction:

using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
}

Next, we create an IDbCommand object through the connection associated with our session and enlist it in the active transaction:

using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    var dbCommand = session.Connection.CreateCommand();
    session.Transaction.Enlist(dbCommand);
}

Finally, we execute the statement:

using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    var dbCommand = session.Connection.CreateCommand();
    session.Transaction.Enlist(dbCommand);
    dbCommand.CommandText = "update SomeTable set SomeField = null";
    dbCommand.ExecuteNonQuery();
    transaction.Commit();
}
Author: "lenshermx"
Date: Wednesday, 19 Dec 2012 22:02

It wouldn’t surprise me if nobody noticed, but I overhauled my NHibernate articles. They are now grouped together in one convenient location.

Maybe when I have time, I will write about all the things I was doing wrong and how I fixed them.

Author: "Dave"
Date: Wednesday, 05 Dec 2012 11:48

Position: Lead .Net Developer

Location: Brussels, Belgium

Duration: 6+ Months contract

Function Description

The client is actively looking for a .NET Lead Developer to join one of its IT project teams. The candidate will work under the supervision of the technical architect and a project manager, and he or she will implement the n-tier solution. Following the client's guidelines and market best practices, the candidate will produce quality code, with unit testing, code coverage and documentation. The candidate will be a key player in the organization of the development tasks: Agile and Scrum methodology is used. Ideally the candidate should have relevant experience in corporate-wide projects and experience with .NET 3.5/4.0.

Responsibilities

  • Organize tasks and planning of development team (source code writing, unit testing,…) in collaboration with Technical Architect and Project Leader ( Agile/Scrum)
  • Ensure development team produce quality code and documentation
  • Develop actively
  • Cooperate with Technical Architect and Software Engineer in implementation of the Technical Architecture
  • Contribute to the documents describing the technical architecture.
  • Produce and communicate realistic predictions about the development work.
  • Produce Component Design for business components
  • Implement System Design
  • Implement business components & technical components
  • Respect guidelines set up by client’s software engineers
  • Contribute to the methodologies & guidelines defined with software engineers.
  • Coordination of deployment with ICT Operation teams

Communication Skills

  • Ability to communicate appropriately with developers
  • Ability to communicate with software engineer and other technical architects to understand the constraints of the architecture that must be followed
  • Ability to communicate with business analysts for understanding the specifications and designs that form the basis for implementation
  • Team player
  • Ideally bilingual (FR/NL), but an active knowledge of one of the languages and a passive knowledge of the other will suffice.
  • A good understanding of written English is necessary because all the documentation is written in English

Personal Skills

  • Being Stress-Resistant.
  • Have a technology oriented drive and be passionate about it.
  • Ability to work relatively independently following the priorities and timing of the project plan.
  • Ability to work structured following the procedures of the project (check-in, check-out, time sheet reporting, and punctuality).
  • Ability to work according to rules, standards and guidelines defined in the architecture.
  • Ability to communicate realistic predictions about the development work.
  • Being prepared to commute daily to Brussels for the whole duration of the mission.

 

Technical Skills

  • Sound knowledge of (at least 2) of these technologies is a must : WPF, WCF, LinQ, NHibernate, ASP.NET, MOSS
  • Capability to develop based on UML specifications
  • The ability to develop code in a structured way according to the coding conventions and rules as set out in the architecture
  • The ability to document code in a concise manner
  • The ability to read formal designs and specifications
  • Fluency in the use of the following tools:
  • Visual Studio .NET 2010
  • TFS 2010
  • Development related tools (programming languages, integrated development environments, testing, build, …)
  • Platform related tools (database, middleware, application servers, deployment,…)
  • Documentation tools (Microsoft Office or equivalent)
  • Modeling tools (UML based CASE tools)
  • Change and configuration management tools

Experience & expertise

  • Sound knowledge of OO & distributed development
  • Sound knowledge of the C# programming language.
  • Sound knowledge of WCF and WPF.
  • Sound knowledge of relational databases, database design, SQL, stored procedures (Oracle and/or SQL Server).
  • Sound knowledge of the technical platform on which the project has to be implemented including aspects such as the hardware, the development environment (.Net 3.5/4.0), the programming languages, the middleware, the database (Oracle), …
  • Experience with XML (XSD, XSLT,…).
  • Sound knowledge of the component based development principles as explained in CBD concepts.
  • Sound knowledge of generally applicable implementation patterns.
  • Typically these people have at least 3-5 years of development experience with the programming language and technical platform on which the system is developed.
  • Sound knowledge of TFS
  • Sound knowledge of Scrum and agile methodologies

Kindly send your CVs to eugene.kujur@stefanini.com along with expected daily rates.

Author: "Eugene Kujur"
Date: Friday, 19 Oct 2012 18:01

NHibernate is an object-relational mapping (ORM) solution for the Microsoft .NET platform.

NHibernate is a mature, open source object-relational mapper for the .NET framework.

Vikas Jindal

Author: "starsworlds"
Date: Friday, 19 Oct 2012 09:33
Update: Please take a look here before taking this article too seriously. I still think it includes
Author: "Adrian Bontea"
Date: Saturday, 13 Oct 2012 03:20

To set the scene for the code, here is a diagram of the database.

Database

The relationships in the above diagram are as follows:

  • One to Many (i.e. A country has many clubs but a club belongs to only one country)
  • One to One (i.e. A club has one club detail and a club detail extends only one club)
  • Many to Many (i.e. A club has many sponsors and a sponsor works with many clubs)

The relationship-defining calls in the mappings below are ManyToOne (club → country), OneToOne (club → club detail), and Set combined with OneToMany (country → clubs) or ManyToMany (club ↔ sponsor).

using System.Collections;
using System.Collections.Generic;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

namespace Learning.NHibernate
{
    public class Club
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }

        public virtual Country Country { get; set; }
        public virtual ClubDetail ClubDetail { get; set; }
        public virtual ICollection<Sponsor> Sponsors { get; set; }
    }
       
    public class ClubMapping : ClassMapping<Club>
    {
        public ClubMapping()
        {
            Schema("dbo");
            Table("Club");

            Id(i => i.Id);            
            Property(x => x.Name);

            ManyToOne(x => x.Country, x =>
            {
                x.Column("CountryId");
            });

            OneToOne(x => x.ClubDetail, mapper =>
            {
                mapper.PropertyReference(typeof(ClubDetail).GetProperty("Id"));
            });

            Set(x => x.Sponsors, collectionMapping =>
            {
                collectionMapping.Table("ClubSponsor");
                collectionMapping.Key(map => map.Column("ClubId"));                  
            },
            map => map.ManyToMany(p => p.Column("SponsorId")));
        }
    }
}
using System.Collections;
using System.Collections.Generic;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

namespace Learning.NHibernate
{
    public class Country
    {
        public virtual int Id { get; set; }
        public virtual string AbbreviatedName { get; set; }  
        public virtual string Name { get; set; }
        public virtual ICollection<Club> Clubs { get; set; }
    }
       
    public class CountryMapping : ClassMapping<Country>
    {
        public CountryMapping()
        {
            Schema("dbo");
            Table("Country");

            Id(i => i.Id);

            Property(x => x.AbbreviatedName);         
            Property(x => x.Name);

            Set(x => x.Clubs, mapping =>
            {
                mapping.Key(k =>
                {
                    k.Column("CountryId");
                });
                mapping.Inverse(true);                
            },
            r => r.OneToMany());
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

namespace Learning.NHibernate
{
    public class ClubDetail
    {
        public virtual int Id { get; set; }
        public virtual string AbbreviatedName { get; set; }
        public virtual string WebsiteURL { get; set; }        
        public virtual Club Club { get; set; }

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(null, obj))
                return false;

            if (ReferenceEquals(this, obj))
                return true;

            var a = obj as ClubDetail;
            if (a == null)
                return false;

            return a.Club.Id == Club.Id;
        }

        public override int GetHashCode()
        {
            unchecked
            {
                var hash = 21;
                hash = hash * 37 + Club.Id.GetHashCode();

                return hash;
            }
        }
    }
       
    public class ClubDetailMapping : ClassMapping<ClubDetail>
    {
        public ClubDetailMapping()
        {
            Schema("dbo");
            Table("ClubDetail");

            ComposedId(i => i.ManyToOne(x => x.Club, mapper =>
            {
                mapper.Column("Id");
                mapper.ForeignKey("FK_ClubDetail_Club");
            }));

            Property(x => x.Id);
            Property(x => x.AbbreviatedName);
            Property(x => x.WebsiteURL);         
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

namespace Learning.NHibernate
{
    public class Sponsor
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual ICollection<Club> Clubs { get; set; }
    }

    public class SponsorMapping : ClassMapping<Sponsor>
    {
        public SponsorMapping()
        {
            Schema("dbo");
            Table("Sponsor");

            Id(i => i.Id);

            Property(x => x.Name);

            Set(x => x.Clubs, collectionMapping =>
            {
                collectionMapping.Table("ClubSponsor");
                collectionMapping.Key(map => map.Column("SponsorId"));
            },
            map => map.ManyToMany(p => p.Column("ClubId")));
        }
    }
}
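
For completeness, here is a minimal sketch of how these by-code mappings might be compiled and fed into a session factory. The SessionFactoryBuilder class name, connection string, and dialect are assumptions; adjust them for your own environment.

using NHibernate;
using NHibernate.Cfg;
using NHibernate.Cfg.MappingSchema;
using NHibernate.Dialect;
using NHibernate.Mapping.ByCode;

namespace Learning.NHibernate
{
    public static class SessionFactoryBuilder
    {
        public static ISessionFactory Build()
        {
            // Compile the by-code mappings defined above into a single HbmMapping document
            var mapper = new ModelMapper();
            mapper.AddMappings(new[]
            {
                typeof(ClubMapping),
                typeof(CountryMapping),
                typeof(ClubDetailMapping),
                typeof(SponsorMapping)
            });
            HbmMapping domainMapping = mapper.CompileMappingForAllExplicitlyAddedEntities();

            // Configure the database (connection string and dialect are placeholders)
            var configuration = new Configuration();
            configuration.DataBaseIntegration(db =>
            {
                db.ConnectionString = "Server=.;Database=Learning;Integrated Security=SSPI;";
                db.Dialect<MsSql2008Dialect>();
            });
            configuration.AddMapping(domainMapping);

            return configuration.BuildSessionFactory();
        }
    }
}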

Author: "pwee167"
Date: Wednesday, 03 Oct 2012 19:39

Recently, I have been using QueryOver heavily to deliver data from my database to my view models. When our team first decided to standardize on QueryOver, I did some research to make sure I understood how it differs from LINQ to NHibernate, and how both differ from basic NHibernate Criteria queries. I want to share what I learned in the hope of saving you some time and research. So, without further ado, let's take a quick look at why it is important to use a data layer to move data from your database to your application layer, and at the differences between LINQ to NHibernate, QueryOver, and basic Criteria queries.

First of all, let's talk about data layers and ORMs. Why are they important? Why shouldn't you just pass data straight from the database to your UI? Security certainly plays a part, but another notable factor is layer independence, also called layer separation. Layer independence allows your data layer to be reused by different UI applications and creates a security boundary between the application and your data.

Let’s take a look at this diagram that I found on infoq.com:

This diagram represents a properly layered application. Though it depicts a Java application, we can still take note of the separation between the database and the application layers. Adding a service layer to your architecture can also strengthen the security and integrity of your data by narrowing the access paths.

Now that we understand the importance of the data layer, let's assume we have chosen NHibernate as our ORM. It is a free and very powerful ORM, and it can also incorporate an information retrieval library (i.e. a search engine) via NHibernate Search, which uses Lucene.Net. NHibernate does have a learning curve, but it's getting easier to use with each release, and it has quite a following in the open source community.

So now that we are one of the cool kids using NHibernate, our application needs to be able to query the database. Let's assume we have already mapped all the tables we need to objects with their proper relationships. The next question is: how do I query my data using NHibernate's API? There are seven different ways to write queries using NHibernate, but I want to focus on the three main ones:

  • Criteria
  • LINQ to NHibernate
  • QueryOver

Criteria queries are object-oriented queries that are a good fit for dynamic querying. They solve HQL's problem of building queries through string concatenation. Criteria queries still use strings for property names, although you can use Projections to make them somewhat more strongly typed. I don't like this option, mainly because it isn't pretty and it still refers to properties by string name.
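
As a quick illustration, here is a minimal Criteria query. This is only a sketch: it assumes an open ISession named session and a mapped Club entity with a Name property, both of which are hypothetical.

using System.Collections.Generic;
using NHibernate;
using NHibernate.Criterion;

// The property is referenced by its string name, so a typo only fails at runtime.
IList<Club> clubs = session.CreateCriteria<Club>()
    .Add(Restrictions.Eq("Name", "Arsenal"))
    .List<Club>();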

Next we have LINQ to NHibernate, another object-oriented query API that improves readability by using the familiar LINQ syntax. The problem with this API is that it does not support left joins, subqueries, and several other constructs you may need to write complex queries properly. I will be honest: when I first saw that I could use LINQ to query through NHibernate I was ecstatic, but once I began writing some of the more complex queries, I found that the LINQ provider fell flat compared to QueryOver.
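
The same query with LINQ to NHibernate might look like this (again a sketch using the same hypothetical session and Club entity):

using System.Linq;
using NHibernate.Linq;   // provides the Query<T>() extension method on ISession

// Strongly typed and readable, but limited once the query gets complex.
var clubs = session.Query<Club>()
    .Where(c => c.Name == "Arsenal")
    .ToList();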

Finally, we get to QueryOver. QueryOver is written on top of the Criteria API, so you can still use Criteria features within it. The difference is that QueryOver uses lambda expressions instead of strings, which keeps your queries strongly typed. Criteria's Projections and Restrictions are also compatible with QueryOver queries. QueryOver supports generics and has a TransformUsing method that lets you return only the properties you need from the query, which helps avoid bottlenecks. This in turn makes it easy to shape your result set to fit a view model.
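
Here is a sketch of the same query written with QueryOver, projecting into a ClubSummary DTO via TransformUsing. The entity, the DTO, and the session are all assumptions for illustration.

using NHibernate.Criterion;   // WithAlias extension for projections
using NHibernate.Transform;

public class ClubSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// ... elsewhere, given an open ISession named session:

ClubSummary summary = null;   // alias target used only to name the projection columns

var summaries = session.QueryOver<Club>()
    .Where(c => c.Name == "Arsenal")
    .SelectList(list => list
        .Select(c => c.Id).WithAlias(() => summary.Id)
        .Select(c => c.Name).WithAlias(() => summary.Name))
    .TransformUsing(Transformers.AliasToBean<ClubSummary>())
    .List<ClubSummary>();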

QueryOver is by far the most powerful of NHibernate's querying APIs: it supports subqueries and most join types, including inner, outer, and full joins, while being far more readable than Criteria queries.
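
For instance, a join written with QueryOver might look like the following sketch, assuming hypothetical Club and Country entities where Club has a Country property:

using NHibernate.SqlCommand;   // JoinType

Country countryAlias = null;

// Left outer join from Club to Country, filtered on the joined entity.
var englishClubs = session.QueryOver<Club>()
    .JoinAlias(c => c.Country, () => countryAlias, JoinType.LeftOuterJoin)
    .Where(() => countryAlias.Name == "England")
    .List();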

If you are using NHibernate and need to write complex queries, I would go with QueryOver. It is the easiest to read and gives you the most power.

That's all I have for this post. If anyone has any questions, please leave me a comment.

Author: "jaredmahan"
Date: Friday, 28 Sep 2012 15:39

In the summer I needed to write a Windows Service that used NHibernate. Its purpose was to scan 200,000 pension schemes and check which ones had not responded to our communications. It would read a large number of records and then update them one by one in various ways depending on their state. It was a long-running service that came alive once a day. Consequently, I envisaged two potential problems:

1. Dirty Reads – I did not want NHibernate to hand me first-level cached objects after the first pass instead of going to the database on subsequent passes.
2. Memory Leaks – As the service would be running for a long time, I did not want to repeatedly load lots of objects into the session without disposing of them.

To solve these problems I wrote a Transaction Manager. This class used a Delegate wrapper pattern to:

1. Open the NHibernate session.
2. Begin the NHibernate transaction.
3. Commit the transaction.
4. Dispose of the transaction.
5. Dispose of the session.

Furthermore, as the TryManage() method returns a bool, you can throw your own meaningful exception in the calling class (while still logging the original problem with the commit). Because the manager only creates and disposes the Session object (and not the entire SessionFactory), it is also very performant. Finally, I think this class introduces a nice uniformity to your code while also reducing duplication.

The Transaction Manager

Below is the code for the manager. Note that I am using the Castle Windsor NHibernate Integration facility to inject Castle's ISessionManager, and log4net to log the exceptions.

using System;
using System.Reflection;
using Castle.Facilities.NHibernateIntegration;   // Castle's ISessionManager
using log4net;

public class TransactionManager : ITransactionManager
{
    private static readonly ILog Logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
    private readonly ISessionManager sessionManager;

    public TransactionManager(ISessionManager sessionManager)
    {
        this.sessionManager = sessionManager;
    }

    // Wraps the supplied work in an NHibernate session and transaction.
    // Returns false (and logs the exception) if the work or the commit fails.
    public bool TryManage(Action action)
    {
        try
        {
            using (var session = sessionManager.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                action();
                transaction.Commit();
            }
            return true;
        }
        catch (Exception ex)
        {
            Logger.Error(ex);
            return false;
        }
    }
}
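
The ITransactionManager contract that the class implements is not shown in the post; a minimal version might simply be:

using System;

public interface ITransactionManager
{
    // Runs the supplied work inside an NHibernate session and transaction;
    // returns false if the work or the commit fails.
    bool TryManage(Action action);
}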

Using the Transaction Manager

Below is an example of how I used it in the service.

...
if (!transactionManager.TryManage(() =>
    {
        var rpStatus = rPStatusRepository.Get(localRpId);

        if (SchemeHasMitigatingCircumstances(rpStatus))
        {
            // do one update
        }
        else
        {
            // do another update
        }

        rPStatusRepository.SaveOrUpdate(rpStatus);
    }))
{
    throw new ApplicationException(string.Format(
        "Attempted to process RPStatusId {0} in SomeMethod() but the transaction failed.", rpId));
}
 
Author: "codenamesean"