

Date: Wednesday, 10 Apr 2013 14:04

While developing with Windows Azure SDK and local Azure development fabric, when things go wrong, they go really wrong.

It could be something as obscure as a leftover empty element somewhere in Web.config or a certificate issue.

The problem is that before Visual Studio attaches a debugger to an IIS worker process, a lot of things can go wrong. What you get is this:

There's no Event Log entry, nothing in the typical dev fabric temp folder (always check 'C:\Users\ \AppData\Local\Temp\Visual Studio Web Debugger.log' first), nada.

Poking deeper, what you need to do is allow IIS to respond properly. By default, IIS displays complete error details only for local addresses. So, to get a detailed report, you need to hit the site via a local address.

You can get a local address by fiddling with the site binding in IIS Manager, changing what the Azure SDK set up:

  • First, start IIS Management Console.
  • Then right-click your deployment(*).* site and select Edit Bindings.
  • Set the IP address to All Unassigned.

If you hit your web/service role using the new local address (could be something like http://127.0.0.1:82), you will most likely get the full disclosure, like this:

In this case, a service was dispatched into the dev fabric with an empty element somewhere in Web.config. The web role was failing before Visual Studio could attach a debugger, and only an HTTP 500 was returned through the normal means of communication.
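If fiddling with bindings is not an option, IIS can also be told to return detailed errors to remote clients directly. Something like this in Web.config should do it - this is the standard IIS 7 httpErrors setting, not anything Azure-specific, and it obviously won't help when Web.config itself is the broken piece:

<configuration>
  <system.webServer>
    <!-- Return full error details to remote clients too, not just local ones -->
    <httpErrors errorMode="Detailed" />
  </system.webServer>
</configuration>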

Author: "Matevz Gacnik" Tags: ".NET 4.0 - WCF, Windows Azure"
Date: Sunday, 31 Mar 2013 20:40

There's a notion of Windows Guest OS versions in Windows Azure. Guest OS versions can currently (in Q1 2012) be either a stripped-down version of Windows Server 2008 or a similar version of Windows Server 2008 R2.

You can upgrade your guest OS in Windows Azure Management Portal:

Not that it makes much difference, especially while developing .NET solutions, but I like to be on the newest OS version all the time.

The problem is that the defaults are stale. In version 1.6 of the Windows Azure SDK, the default templates all specify the following:


xmlns="
http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
osFamily="1"
osVersion="*">

The osFamily attribute defines the OS version, with 1 being Windows Server 2008 and 2 being Windows Server 2008 R2. If you omit the osFamily attribute, the default is 1 too! Actually, this attribute should probably move to the Role element, since it defines the version of the role's guest OS.


   xmlns="
http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
  
osFamily="1"
   osVersion="*">
 
   
    

>




>

It doesn't make sense to have it normalized over all roles. Also, this schema makes it impossible to leave it out for VM role instances, where it gets ignored.

The osVersion attribute defines which version of the guest OS should be deployed to your instances. The format is * or WA-GUEST-OS-M.m_YYYYMM-nn. You should never use the latter. An asterisk normally means 'please upgrade all my instances automatically'. The asterisk is your friend.

If you want/need Windows Server 2008 R2, change it in your service configuration XML.
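Since osFamily="2" maps to Windows Server 2008 R2 (see above), the change is a one-attribute edit:

<ServiceConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="2"
    osVersion="*">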

What this means is that even if you publish and then upgrade your guest OS version in the Azure Management Portal, the setting will be reverted the next time you update your app from within Visual Studio.

Author: "Matevz Gacnik" Tags: "Windows Azure"
Date: Monday, 26 Mar 2012 09:36

There are a couple of Windows Azure management tools, scripts and PowerShell commandlets available, but I find Windows Azure Platform Management Tool (MMC snap-in) one of the easiest to install and use for different Windows Azure subscriptions.

The problem is that the tool has not been updated for almost a year and is thus failing when you try to install it on top of the latest Windows Azure SDK (currently v1.6).

Here's the solution.

Author: "Matevz Gacnik" Tags: "Windows Azure"
Date: Wednesday, 05 Oct 2011 09:17

This is the content from my Bleeding Edge 2011 talk on pub/sub broker design and implementation.

Due to constraints of the project (European Commission, funded by the EU) I cannot publicly distribute the implementation code at this time. I planned to do it after the review process is done, but I have been advised that this probably won't be possible.

Specifically, this is:

  • A message based pub/sub broker
  • Can use typed messages
  • Can be extended
  • Can communicate with anyone
  • Supports push and pull models
  • Can call you back
  • Service based
  • Fast (in memory)
  • Is currently fed by the Twitter gardenhose stream, for your pleasure

Anyway, I can discuss the implementation and design decisions, so here's the PPT (in Slovene only).
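To give a feel for the push model, here is a minimal sketch of a typed, in-memory pub/sub broker. It is illustrative only - the type and member names are my stand-ins, not the project's actual code:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public class Broker
{
    private readonly ConcurrentDictionary<Type, List<Action<object>>> subscribers =
        new ConcurrentDictionary<Type, List<Action<object>>>();

    // Push model: the broker calls you back when a message of type T arrives.
    public void Subscribe<T>(Action<T> handler)
    {
        List<Action<object>> list = subscribers.GetOrAdd(
            typeof(T), t => new List<Action<object>>());
        lock (list) { list.Add(o => handler((T)o)); }
    }

    public void Publish<T>(T message)
    {
        List<Action<object>> list;
        if (!subscribers.TryGetValue(typeof(T), out list)) return;

        Action<object>[] snapshot;
        lock (list) { snapshot = list.ToArray(); }
        foreach (Action<object> handler in snapshot)
            handler(message); // typed dispatch, in memory, hence fast
    }
}

A subscriber registers with broker.Subscribe<Tweet>(t => ...) (Tweet being a hypothetical typed message), while a feeder - the gardenhose reader, say - simply calls broker.Publish(tweet).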

Downloads: Bleeding Edge 2011, PPTs
Demo front-end: Here

Author: "Matevz Gacnik" Tags: ".NET 4.0 - General, .NET 4.0 - WCF, Web ..."
Date: Tuesday, 07 Jun 2011 06:57

The following is a three-day saga of an empty 'Turn Windows Features on or off' dialog.

This dialog, as unimportant as it may seem, is the only orifice into Windows subsystem installations that doesn't require cramming up command-line msiexec.exe wizardry on obscure system installation folders that nobody wants to understand.

Empty, it looks like this:

First thing anyone should do when it comes to something obscure like this is:

  1. Reinstall the OS (kidding, but would help)
  2. In-place upgrade of the OS (kidding, but would help faster)
  3. Clean reboot (really, but most probably won't help)
  4. Run chkdsk /f and sfc /scannow (really)
  5. If that does not help, proceed below

If you still can't control your MSMQ or IIS installation, then you need to find out which of the servicing packages got corrupted somehow.

Servicing packages are Windows Update MSIs, located in hell under HKLM\Software\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages. I've got a couple thousand under there, so the only question is how to root the rogue one out.

There's a tool called the System Update Readiness Tool [here] that nobody uses. Its side effect is that it checks for peculiarities like this. Run it, then unleash notepad.exe on C:\Windows\Logs\CBS\CheckSUR.log and look for something like this:

Checking Windows Servicing Packages

Checking Package Manifests and Catalogs
(f) CBS MUM Corrupt 0x800F0900 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.mum  Line 1:

(f) CBS Catalog Corrupt 0x800B0100 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.cat  

Then find the package in the registry, take ownership of the node, set permissions so you can delete it, and delete it. Your OptionalFeatures.exe works again, and it took only 10 minutes.
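For the keyboard-inclined: once ownership and permissions are sorted out, the removal itself can be scripted with plain reg.exe, using the corrupt package name from the CheckSUR log above (export a backup first):

reg export "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages\Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3" backup.reg

reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages\Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3" /f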

Author: "Matevz Gacnik" Tags: "Other, Windows 7, Work"
Date: Wednesday, 04 May 2011 20:02

This is a slightly less technical post, covering my experiences and thoughts on cloud computing as a viable business processing platform.

The recent Amazon EC2 failure gathered a considerable amount of press and discussion coverage. Mostly, discussions revolve around the failure of cloud computing's promise to never go down and never lose a bit of information.

This promise is wrong and has been wrong for a couple of years. Marketing people should not be making promises their technical engineers can't deliver. Actually, marketing should step away from highly technical features and services in general. I find it funny that there is no serious marketing involved in selling BWR reactors (which fail too), but they probably serve the same number of people as cloud services do nowadays.

Getting back to the topic: as you may know, EC2 failed miserably a couple of weeks ago. It was something that should not happen - at least in many techie minds. The fault in the AWS EC2 cloud was in the EBS storage system, which failed across multiple AWS availability zones within the same AWS region in North Virginia. Think of availability zones as server racks within the same data center, and of regions as different datacenters.

Companies like Foursquare, Tekpub, Quora and others all deployed their solutions to the same Amazon region - North Virginia - and were thus susceptible to problems within that specific datacenter. They could have replicated across different AWS regions, but did not.

Thus, clouds will fail. It's only a matter of time. They will go down. The main thing clouds deliver is a lower probability of failure, not its elimination. Thinking that cloud computing will solve the industry's fears of losing data or deliver 100% uptime is downright imaginary.

Take a look at EC2's SLA. It says 99.95% availability. Microsoft's Azure SLA? 99.9%. Over a year of 8,760 hours, 0.05% is almost 4.5 hours and 0.1% is almost 9 hours of downtime built in! And we didn't even start to discuss how much junk marketing people will sell.

We are still in an IaaS world, although Microsoft is really pushing PaaS and SaaS hard. Having said that, Windows Azure's goal of 'forget about it, we will save you anyway' currently has a lot more merit than other offerings. It is indeed trying to go the PaaS and SaaS route while abstracting the physical machines, racks and datacenters.

Author: "Matevz Gacnik" Tags: "Architecture, Other"
Date: Sunday, 27 Mar 2011 16:54

I am more active on Twitter lately, finding it amusing personally.

Handle: http://twitter.com/matevzg

Author: "Matevz Gacnik" Tags: "Personal"
Date: Thursday, 09 Dec 2010 13:07

During exploration of high availability (HA) features of Windows Server AppFabric Distributed Cache I needed to generate enough load in a short timeframe. You know, to kill a couple of servers.

This is what came out of it.

It's a simple command line tool, allowing you to:

  • Add millions of objects of arbitrary size to the cache cluster (using cache.Add())
  • Put objects of arbitrary size to the cache cluster
  • Get objects back
  • Remove objects from the cache
  • Has cluster support
  • Has local cache support
  • Will list configuration
  • Will max out your local processors (using .NET 4 Parallel.For())
  • Will perform gracefully, even in times of trouble

I talked about this at a recent Users Group meeting, doing a live demo of cache clusters under load.

Typical usage scenario is:

  1. Configure a HA cluster
    Remember, 3 nodes minimum, Windows Server 2008 (R2) Enterprise or DataCenter
  2. Configure a HA cache
  3. Edit App.config, list all available servers
  4. Connect to cluster
  5. Put a bunch of large objects to generate load (see the sketch after this list)
    Since AppFabric currently supports only the partitioned cache type, this will distribute load among all cluster hosts. Thus, each host will store roughly 1/N of the objects.
  6. Stop one node
  7. Get all objects back
    Since the cache is in HA mode, you will get all your objects back even though a host is down - the cluster will redistribute all the missing cache regions to the running nodes.
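Here's the promised sketch of steps 5 and 7, assuming the standard AppFabric client API (Microsoft.ApplicationServer.Caching) and a cache named 'default'; the object count and payload size are arbitrary:

using System;
using System.Threading.Tasks;
using Microsoft.ApplicationServer.Caching;

class CacheLoad
{
    static void Main()
    {
        // Cluster hosts are read from the dataCacheClient section in App.config.
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetCache("default");

        byte[] payload = new byte[512 * 1024]; // 512 KB per object

        // Step 5: generate load, maxing out local processors.
        Parallel.For(0, 100000, i => cache.Put("key-" + i, payload));

        // Step 7: read everything back; with HA enabled this succeeds
        // even after one cache host goes down.
        Parallel.For(0, 100000, i =>
        {
            byte[] item = (byte[])cache.Get("key-" + i);
        });
    }
}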

You can download the tool here.

Author: "Matevz Gacnik" Tags: ".NET 4.0 - General, Architecture, Micros..."
Date: Thursday, 19 Aug 2010 20:31

One of the most important steps needed for computer science to get itself to the next level seems to be fading away.

P vs NP

Actually, its proof is playing hard to catch again. This question (whether P=NP or P!=NP) does not want to be answered. It could be that the problem of proving it is itself NP-complete.

The (scientific) community wants, even needs, closure. If P != NP were proven, a lot of orthodox legislation around PKI, cryptography and signature/timestamp validity would probably become looser. If P=NP is true, well, s*!t hits the fan.

Author: "Matevz Gacnik" Tags: "Other"
Date: Saturday, 24 Oct 2009 11:50

Developer pureness is what we should all strive for.

This is not it:

Conflicting with this:

[c:\NetFx4\versiontester\bin\debug]corflags VersionTester.exe
Microsoft (R) .NET Framework CorFlags Conversion Tool.  Version  3.5.21022.8
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v4.0.21006
CLR Header: 2.5
PE        : PE32
CorFlags  : 1
ILONLY    : 1
32BIT     : 0
Signed    : 0

Visual Studio must display a version number of 4.0 instead of 4 in the RTM. It needs to display the major and minor numbers of the new framework coherently with 2.0, 3.0 and 3.5.

Otherwise my compulsive disorder kicks in.

Author: "Matevz Gacnik" Tags: ".NET 4.0 - General"
Date: Saturday, 24 Oct 2009 11:39

I'm just poking around the new framework, dealing with corflags and CLR Header versions (more on that later), but what I found is the following.

Do a dir *.dll /o:-s in the framework directory. You get this on a .NET FX 4.0 beta 2 box:

[c:\windows\microsoft.net\framework64\v4.0.21006]dir *.dll /o:-s

 Volume in drive C is 7  Serial number is 1ED5:8CA5
 Directory of  C:\Windows\Microsoft.NET\Framework64\v4.0.21006\*.dll

 7.10.2009 3:44   9.833.784  clr.dll
 7.10.2009 2:44   6.072.664  System.ServiceModel.dll
 7.10.2009 2:44   5.251.928  System.Windows.Forms.dll
 7.10.2009 6:04   5.088.584  System.Web.dll
 7.10.2009 5:31   5.086.024  System.Design.dll
 7.10.2009 3:44   4.927.808  mscorlib.dll
 7.10.2009 3:44   3.529.608  Microsoft.VB.Activities.Compiler.dll
 7.10.2009 2:44   3.501.376  System.dll
 7.10.2009 2:44   3.335.000  System.Data.Entity.dll
 7.10.2009 3:44   3.244.360  System.Data.dll
 7.10.2009 2:44   2.145.096  System.XML.dll
 7.10.2009 5:31   1.784.664  System.Web.Extensions.dll
 7.10.2009 2:44   1.711.488  System.Windows.Forms.DataVis.dll
 7.10.2009 5:31   1.697.128  System.Web.DataVis.dll
 7.10.2009 5:31   1.578.352  System.Workflow.ComponentModel.dll
 7.10.2009 3:44   1.540.928  clrjit.dll
 7.10.2009 3:44   1.511.752  mscordacwks.dll
 7.10.2009 3:44   1.454.400  mscordbi.dll
 7.10.2009 2:44   1.339.248  System.Activities.Presentation.dll
 7.10.2009 5:31   1.277.776  Microsoft.Build.dll
 7.10.2009 2:44   1.257.800  System.Core.dll
 7.10.2009 2:44   1.178.448  System.Activities.dll
 7.10.2009 5:31   1.071.464  System.Workflow.Activities.dll
 7.10.2009 5:31   1.041.256  Microsoft.Build.Tasks.v4.0.dll
 7.10.2009 2:44   1.026.920  System.Runtime.Serialization.dll

Funny how many engineer dollars are/were spent on middle tier technologies I love.

Proof: System.ServiceModel.dll is bigger than System.dll, System.Windows.Forms.dll, mscorlib.dll, and System.Data.dll.

We only bow to clr.dll, the new .NET 4.0 Common Language Runtime implementation.

Author: "Matevz Gacnik" Tags: ".NET 4.0 - General, .NET 4.0 - WCF"
Date: Wednesday, 03 Jun 2009 13:23

Interoperability day is focused on, well, interoperability - especially between major vendors in the government space.

We talked about major issues in long-term document preservation formats and the penetration of Office serialization in the real world...

... and the lack of support from legislation covering long-term electronic document formats and their use.

Here's the PPT [Slovenian].

Author: "Matevz Gacnik" Tags: "Other, Work"
Date: Saturday, 16 May 2009 18:21

On Monday, 05/18/2009 I'm speaking at Document Interop Initiative in London, England.

The title of the talk is High Fidelity Programmatic Access to Document Content; it will cover the following topics:

  • Importance of OOXML as a standards based format, especially in technical issues of long term storage for document content preservation
  • Importance of legal long term storage for document formats
  • Signature and timestamp benefits of long term XML formats
  • Performance and actual cost analysis of using publicly-parsable formats
  • Benefits of having a high fidelity programmatic access to document content, backed with standardization

The event is held at Microsoft Limited, Cardinal Place, 100 Victoria Street SW1E 5JL, London.

Update: Here's the presentation.

Author: "Matevz Gacnik" Tags: "Microsoft, Work"
Date: Friday, 10 Apr 2009 21:06

There are a couple of situations where one might use svcutil.exe at the command prompt vs. Add Service Reference in Visual Studio.

At least a couple.

I don't know who (or why) made the decision not to support namespace-less proxy generation in Visual Studio. It's a fact that if you add a service reference, you have to specify a nested namespace.

On the other hand, there is an option to make the proxy class either public or internal, which enables one to hide it from the outside world.

That's a design flaw. There are numerous cases where you would not want to create another [1] namespace, because you are designing an object model that needs to be clean.

[1] This is only limited to local type system namespace declarations - it does not matter what you are sending over the wire.

Consider the following namespace and class hierarchy that uses a service proxy (MyNamespace.Service class uses it):

Suppose that the Service class uses a WCF service which synchronizes articles and comments with another system. If you use Add Service Reference, there is no way to hide the generated service namespace. One would probably define MyNamespace.MyServiceProxy as the namespace and declare a using statement in MyNamespace.MyService scope with:

using MyNamespace.MyServiceProxy;

This results in a dirty object model - your clients will see your internals and that shouldn't be an option.

Your class hierarchy now has another namespace, called MyNamespace.MyServiceProxy, which is actually only used inside the MyNamespace.Service class.

What one can do is insist that the generated classes are marked internal, hiding the classes from the outside world, but still leaving an empty namespace visible for instantiating assemblies.

Leaving only the namespace visible, with no public classes in it, is pure cowardice.

Not good.

Since there is no internal namespace modifier in .NET, you have no other way of hiding your namespace, even if you demand internal classes to be generated by the Add Service Reference dialog.

So, svcutil.exe to the rescue:

svcutil.exe <metadata endpoint>
   /namespace:*,MyNamespace
   /internal
   /out:ServiceProxyClass.cs

The /namespace option allows you to map between XML and CLR namespace declarations. The preceding example maps all (therefore *) existing XML namespaces in service metadata to the MyNamespace CLR namespace.

The example will generate a service proxy class, mark it internal and put it into the desired namespace.
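The net effect, sketched below with hypothetical type names (SyncServiceClient standing in for whatever svcutil generated into ServiceProxyClass.cs):

// SyncServiceClient was generated with /internal, so it is
// invisible to any assembly referencing this one.
namespace MyNamespace
{
    public class Service
    {
        private readonly SyncServiceClient proxy = new SyncServiceClient();

        // Clients see only the clean, public surface.
        public void SynchronizeArticles()
        {
            proxy.SynchronizeArticles();
        }
    }
}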

Your object model will look like this:

Currently, this is the only way to hide the internals of service communication behind a proxy while still keeping your object model clean.

Note that this is useful when you want to wrap an existing service proxy or hide an already supported service implementation detail in your object model. Since your client code does not need access to the complete service proxy (or you want to enhance it), it shouldn't be a problem to hide it completely.

Considering options during proxy generation, per-method access modifiers would be beneficial too.

Author: "Matevz Gacnik" Tags: ".NET 3.5 - WCF, Web Services"
Date: Monday, 06 Apr 2009 12:22

XmlSerializer is a great piece of technology. Combined with xsd.exe and friends (XmlSerializerNamespaces, et al.) it's a powerful tool for getting around XML instance serialization/deserialization.

But there is a potentially serious bug present, even in the 3.5 SP1 version of the .NET Framework.

Suppose we have the following XML structure:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <B:Header></B:Header>
  <Body></Body>
</Envelope>

This tells you that the Envelope and Body elements are in the same namespace (namely 'NamespaceA'), while Header is qualified with 'NamespaceB'.

Now suppose we need to programmatically insert the <B:Header> element into an empty core document.

Core document:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <Body></Body>
</Envelope>

Now do an XmlNode.InsertBefore() of the following:

<B:Header>...</B:Header>

We should get:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <B:Header>...</B:Header>
  <Body></Body>
</Envelope>

To get the to-be-inserted part, one would serialize (using XmlSerializer) the following Header document:

<B:Header xmlns:B="NamespaceB">
  ...
</B:Header>

To do this, some simple XmlSerializer magic will do the trick:

// h is the Header instance to serialize
XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");

XmlSerializer xSer = new XmlSerializer(typeof(Header));
MemoryStream ms = new MemoryStream();
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);

ms.Seek(0, SeekOrigin.Begin);

XmlDocument doc = new XmlDocument();
doc.Load(ms);
ms.Close();

This generates exactly what we wanted: a prefixed, namespace-qualified XML document, with the B prefix bound to 'NamespaceB'.

Now, if we import this document fragment into our core document using XmlDocument.ImportNode(), we get:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <B:Header xmlns:B="NamespaceB">...</B:Header>
  <Body></Body>
</Envelope>

Which is valid and actually, from an XML Infoset view, an isomorphic document to the original. So what if it's got the same namespace declared twice, right?

Right - until you involve digital signatures. I have described a specific problem with ambient namespaces at length in this blog entry: XmlSerializer, Ambient XML Namespaces and Digital Signatures.

When importing a node from another context, XmlNode and friends resolve against all namespace declarations in scope. So, when importing such a header, we shouldn't get a duplicate namespace declaration.

The problem is, we don't get a duplicate namespace declaration, because XmlSerializer actually inserts a normal XML attribute into the Header element. That's why we seem to get another namespace declaration. It's actually not a declaration but a plain old attribute. It's even visible (in this case in XmlElement.Attributes), and it definitely shouldn't be there.
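You can see the phantom attribute for yourself. Continuing from the serialization snippet above (doc being the XmlDocument we just loaded), this prints xmlns:B = NamespaceB:

// The 'declaration' shows up as an ordinary attribute of the root element.
foreach (XmlAttribute a in doc.DocumentElement.Attributes)
    Console.WriteLine("{0} = {1}", a.Name, a.Value);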

So if you hit this special case, remove all attributes before importing the node into your core document. Like this:

// h is the Header instance to serialize
XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");

XmlSerializer xSer = new XmlSerializer(typeof(Header));
MemoryStream ms = new MemoryStream();
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);

ms.Seek(0, SeekOrigin.Begin);

XmlDocument doc = new XmlDocument();
doc.Load(ms);
ms.Close();

// strip the phantom namespace attribute before importing
doc.DocumentElement.Attributes.RemoveAll();

Note that the serialized document representation will not change, since an ambient namespace declaration (B bound to 'NamespaceB') still exists in the XML Infoset of the doc XML document.

Author: "Matevz Gacnik" Tags: ".NET 3.5 - General, XML"
DOW(n)
Date: Thursday, 09 Oct 2008 20:29

DOW is down almost 700, again. And...

... I just have to post this. Especially when almost everything (besides devaluation of the safe currency) has already been done.

If you had bought $1,000.00 of Nortel stock one year ago, it would now be worth $49.00.

With Enron, you would have $16.50 of the original $1,000.00.

With MCI/WorldCom, you would have less than $5.00 left.

If you had bought $1,000.00 worth of Miller Genuine Draft (the beer, not the stock) one year ago, drunk all the beer, then turned in the cans for the 10-cent deposit, you would have $214.00.

Based on the above, 401KegPlan.com's current investment advice is to take that $5.00 you have left over and drink lots and lots of beer and recycle.

Via: http://401kegplan.com/keg/

We are going over the edge. We as a world economy.

And yea, I opened a keg of Guinness.

Author: "Matevz Gacnik" Tags: "Personal"
Date: Thursday, 09 Oct 2008 19:42

I'm sorry it took a week, but here we go.

This is my exit content from Bleeding Edge 2008. I'm also posting complete conference contents, just in case.

Thanks go out to Dušan, Dejan, Miha, Miha and Miha.

Downloads:

Remark: PPT in Slovene only. Code international.

Thank you for attending. Hope to see you next year!

Author: "Matevz Gacnik" Tags: "CLR, Conferences, Microsoft, Web Service..."
Date: Friday, 29 Aug 2008 18:38

Recently I needed to specify exactly how I would like a specific class serialized. Suppose we have the following simple schema:

<xs:schema targetNamespace="http://my.favourite.ns/person.xsd"
   elementFormDefault="qualified"
   xmlns:xs="http://www.w3.org/2001/XMLSchema"
   xmlns="http://my.favourite.ns/person.xsd">
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element name="Name" type="xs:string" />
      <xs:element name="Surname" type="xs:string" />
      <xs:element name="Age" type="xs:int" minOccurs="0" />
    </xs:sequence>
  </xs:complexType>
</xs:schema>

Let's call this schema President.xsd.

A valid XML instance against this schema would be, for example:

<Person xmlns="http://my.favourite.ns/person.xsd">
   <Name>Barack</Name>
   <Surname>Obama</Surname>
   <Age>47</Age>
</Person>

Since we are serializing against a specific XML schema (XSD), we have the option of schema compilation:

xsd /c President.xsd

This, obviously, yields a programmatic type system result in the form of a C# class. All well and done.

Now.

If we serialize the filled up class instance back to XML, we get a valid XML instance. It's valid against President.xsd.

There is a case where your schema changes ever so slightly - read: the namespaces change - and you don't want to recompile the entire solution to support this, but you still want to use XML serialization. Who doesn't? So what do you do?

Suppose we want to get the following back, when serializing:

<PresidentPerson xmlns="http://schemas.gama-system.com/president.xsd">
   <Name>Barack</Name>
   <Surname>Obama</Surname>
   <Age>47</Age>
</PresidentPerson>

There is an option to override the default serialization technique of XmlSerializer. Enter the world of XmlAttributes and XmlAttributeOverrides:

private XmlSerializer GetOverridedSerializer()
{
   // set overrides for person element
   XmlAttributes attrsPerson = new XmlAttributes();
   XmlRootAttribute rootPerson =
      new XmlRootAttribute("PresidentPerson");
   rootPerson.Namespace =
      "http://schemas.gama-system.com/president.xsd";
   attrsPerson.XmlRoot = rootPerson;

   // create overrider
   XmlAttributeOverrides xOver = new XmlAttributeOverrides();
   xOver.Add(typeof(Person), attrsPerson);

   XmlSerializer xSer = new XmlSerializer(typeof(Person), xOver);
   return xSer;
}

Now serialize normally:

XmlSerializer xSer = GetOverridedSerializer();
Stream ms = new MemoryStream();
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, person); // person is a filled-up Person instance

This will work even if you only have a compiled version of your object graph, and you don't have any sources. System.Xml.Serialization.XmlAttributeOverrides class allows you to adorn any XML serializable class with your own XML syntax - element names, attribute names, namespaces and types.

Remember - you can override them all and still serialize your angle brackets.


Author: "Matevz Gacnik" Tags: "XML"
Date: Thursday, 17 Jul 2008 13:50

I'm happy to announce that our organizational work is coming to fruition.

Bleeding Edge 2008 is taking the stage for the autumn season.


Portorož, Slovenia, October 1st, 9:00

Official site, Registration, Sponsors

Go for the early bird registration (till September 12th). The time is now.

Potential sponsor? Here's the offering.

Author: "Matevz Gacnik" Tags: "Architecture, Conferences, MVP, Work"
Date: Saturday, 05 Jul 2008 12:18

One of our core products, Gama System eArchive was accredited last week.

This is the first accreditation of a domestic product and the first one covering long-term electronic document storage in a SOA-based system.

Every document stored inside the Gama System eArchive product is now legally legal. No questions asked.

Accreditation is done by a national body and represents the last step in the formal acknowledgement of holiness.

That means a lot to me, even more to our company.

The following blog entries were (in)directly inspired by the development of this product:

We've made a lot of effort to get this thing developed and accredited. The certificate is here.

This, this, this, this, this, this, this, this, this and those are direct approvals of our correct decisions.

Author: "Matevz Gacnik" Tags: ".NET 3.0 - General, .NET 3.0 - WCF, .NET..."