

Date: Saturday, 02 Aug 2014 13:00

Project failure is very real

Standish Group research shows that ‘of 3,555 projects from 2003 to 2012 that had labor costs of at least $10 million, only 6.4% were successful’. 41% were total failures and the rest were vastly over budget or did not meet expectations.


In this article I document twenty-five years in the Software Development industry. Project failure is commonplace today; I hope to convince you that Technology Fragmentation is a key cause.

1989 -> 1999: One Project Failure – mostly PowerBuilder/ Oracle

From being a summer engineer in 1989 to leading a project rescue in 1999, only one of my projects failed. Seriously. (The only failure was with a Big 5 IT consultancy, staffed mostly with non-technical resources who did not want to be on that particular project.)

All projects from 1994->1999 were PowerBuilder/ Oracle. Many people became experts in them and there were no additional frameworks. Virtually every project used similar architectural techniques.

During this period a small percentage of individuals appeared essential to some projects. At the very least some hands-on developers undoubtedly shaved months off the schedule. At least twice I saw several contractors almost certainly rescue a project; for the most part they did this by training/ leveraging existing staff, not super-human 80+ hour week coding.

1999->2001: Fought off Failures – Java + major Frameworks:

Technically I had only one outright failure with Java: when I opted not to burn out and resigned from an insanely well-paying contract. They burned through ~$10m in six months.

On my first two Java projects I averaged ~80 hours per week, exceeded 100 hours per week over one seven-week period and even had some 24-hour days! I was the main technical resource, unfortunately hired late in their SDLCs. The developers’ lack of knowledge of the entire technology stack was a core issue. This required me to learn quickly and work unsustainable hours to stabilize the projects and educate others.

On these successful projects several developers followed close behind – it was a team effort to succeed against the unknown technologies, but only ~20->30% of the team contributed in each case. A significant problem was the number of technologies to learn.

With PowerBuilder/ Oracle even the weakest team members were somewhat competent in one of the two technologies, and they generally soon improved (lots of other people to learn from). With Java and its increasing number of frameworks/ app-servers/ etc it was not uncommon for a project to have only one expert per framework/ tool. This meant several people became critical to project success; if they had bluffed their way through the interview, their area was a ticking time-bomb. With a plethora of frameworks it is very hard to screen technically in interviews; unless an expert conducts them, bluffers can be very hard to weed out.

Some Stability with .Net: 2002-> 2007

My first .Net project failed outright, but it was my only outright failure with .Net until ~2012. No one understood the technology on that first project; about a year in I was making great breakthroughs, solving most of the long-standing issues. Unfortunately, due to missed long-term deadlines our strong manager was ousted and replaced with a ‘yes-man’; we disagreed and he soon ousted me. That project failed within a year and I received several supportive emails from client staff. Approaching $10m of tax-payer money was wasted; subsequently I have read many news stories slamming IT at that major branch of the Government (which employs ~300,000 people).

Once up to speed with .Net, virtually every project called me a ‘superstar’, ‘insanely productive’ etc and I did not see a single failure. Unfortunately there was plenty of evening and weekend work to extinguish fires/ meet deadlines.

Why were these projects all successful? We knew the entire technology stack. In particular I knew .Net and Oracle/ SQL Server very well; this enabled extinguishing fires quickly and permitted time to educate their developers. Many were ‘Over-the-Top’ thankful to me for taking the time to assist them (I was glad to help/educate!).

Stress with .Net: 2007 -> Present

By 2007 I still had no failed .Net projects where I had control, but most were stressful, typically from overwhelming amounts of evening/ weekend work.

In ~2007 the .Net market really exploded. This caused three major issues:

  • Quantity of frameworks sky-rocketed
  • Quality of frameworks reduced
  • Hiring quality became harder

By 2007 most projects I arrived at had a fair number of frameworks/ tooling: Microsoft Patterns and Practices, ASP.Net Membership Provider, Log4Net and NUnit were particularly popular. As time progressed ORMs came into the mix; I have worked with at least five ORMs so far. Templating/ code generation frameworks and IDE plug-ins like ReSharper were also popular.

Few new technologies save time in the short term. Most frameworks/tools only reduce time/money/complexity the second or third time they are used. This is well documented. A rule of mine has been to never use more than two new technologies on a single project; the learning curves and risks are just too great.

~2007 -> 2012 was my ‘Project-Rescue Phase’. The majority of these projects required enormous effort to understand the technology stack, and generally battles with managers/ architects to stabilize: typically removing unnecessary technologies and performing major refactorings to simplify code/ make it testable via Continuous Integration.

On my final ‘project-rescue’ contract we met a deadline within three weeks, despite the team having zero to show for the previous three months. Their Solution Architect left them a high-level design document littered with fashionable buzzwords; nothing useful had been produced. Two previous architects had made no progress, including the original solution architect; I was their fourth architect in about three months. One other developer and I began coding everything from scratch to meet phase 1 in three weeks; the other five people did little but heckle. The consulting company I assisted was still being difficult so I left them to it; they lost that client. To be fair they did put something into production, but it took three times longer and was a terrible product. They burned through three more architects during that time.

Performance Tuning: 2010

Around 2010 I advertised for and landed about ten performance-tuning/ defect-fixing contracts. Massive stress, but great intellectual challenges fixing issues customers could not squash. I had a 100% success rate with these, taking a maximum of four days :)

An opportunity to observe many systems over a short period. Unnecessary complexity was the only constant, with frameworks and trendy development techniques the primary offenders. One customer had a mix of Reflection and .Net Remoting that had hindered debugging of most of their code base for years. I removed that problem in ~twenty minutes, which stunned their coders – wide-open eyes and gaping mouths, cartoon style :) [Topic-change: this is where experience counts and that minuscule piece of work was in the 100x Developer zone. Such times are rare; do not let anyone tell you that they are consistently a 10x developer.]

100% Failed .Net Projects: 2012 -> Present

From 2012 forward I decided to stop the ‘workaholic rescue’ thing and instead try to talk sense into managers/ architects/ stakeholders. This was ineffective. Two projects failed outright and the third is a Death-March (stakeholders believe all is fine as they ‘tick off progress boxes’, but reality is a long way from their current perception).

Two of these projects suffered from ‘resume-driven architectures’; the other from classic wishful thinking timescale-wise, but they hired about two hundred Indian contractors to compensate, which always works (not). I was tempted to give each member of the leadership team three copies of The Mythical Man-Month; three copies so they could read it three times faster.

Quantity Up/ Quality Down for additional Frameworks and Tooling:

From 1989 -> ~1998 the number of technologies was modest.

About 1999 the Internet-Effect really began. Ideas took center-stage in the early days. Certainly in the Java world many were rushing to use the latest techniques they had just read about online: EJBs, J2EE, Distributed Processing, Design Patterns, UML etc. Most teams were crippled by senior staff spending their time on these trendy areas rather than focusing on business needs. This coincides with the beginning of my ‘project rescues’ and with being told way too often that I was “hyper-productive compared to the rest of the team” (that has never been an aim).

By ~2007 open source frameworks and tools were center-stage in most projects. Since then we have seen exponential growth, and large companies (Sun, Microsoft, even Google) have proliferated their dev spaces with low-quality framework/tool after low-quality framework/tool, presumably hoping some would stick. Apple is one of the few to exercise real restraint. The Patterns and Practices group within Microsoft is a particularly shameful example most of us are familiar with.

Over Twenty Languages/Frameworks/ Tools is now common?

It is tough to single out one project, but below I have quickly listed forty-two basic technologies/ core concepts that a sub-project at one company used:

“VS 2012, .Net 4+, html5, css3, JavaScript, SPA/ MVC concepts, Backbone, Marionette, Underscore, QUnit, Sinon, Blanket, RequireJS, JSLint, JQuery, Bootstrap, SQL Server, SSIS, ASP.Net Web API, OData, WCF, Pure REST, JSON, EF5, EDMX, Moq, Unity DI, Linq, Agile/Scrum, Rallydev, TFS Source Control, TFS Build, MSBuild scripts, TFS Work Item Management, TFS Server 2010/ 2012, ADSI, SSO, IIS, FxCop, StyleCop, Design Patterns (classic and web)”

Beyond the buzzwords, virtually every area of the application was a custom implementation/ modified framework rather than a standard approach. It was certainly job-security and helped lock out anyone new to the team.

That is just one of three projects since 2012 where I “complained three times, was not listened to, so walked away as calmly as possible rather than fight like I used to”.

Resume-driven Architectures / Ego-driven Development

Why do most contemporary projects use so many frameworks and tools? I see three key drivers that operate during the Solution Architecture phase:

  • Strong External Influence
  • Resume Driven Architectures
  • Ego Driven Development

Strong External Influence is a key driver: SOA appearing on magazine covers, Microsoft MVPs all singing the same tune, etc. Let’s look at how these work. What appears on magazine covers is driven by the major advertisers. SOA sold more servers, hence it was pushed on us. Many friends are MVPs so I must take care in explaining this: many try hard to stay independent, but most are influenced by their MVP sponsor to publish material around certain topics. Over the years I have seen many lose their MVP status, generally after outbursts against Microsoft, or after they stopped producing material Microsoft Marketing wished to see. Apologies to MVPs, but you all know this is the truth.

Resume Driven Architecture is the Solution Architect desiring certain buzzwords on his/her resume to boost their own career, and/or being insecure about finding their next role without the latest buzzwords. On one project, which I had to leave early, the Solution Architect mandated an ESB for a company with under 2,000 employees! Insanity. Of course they failed outright, but not before going three times over schedule and having almost 200% turnover in their contract positions during a six-month period! It is not fair to single out any one individual; we have all seen a platoon/ company sized number of such people over our careers.

Ego Driven Development: Bad managers tend to compare ‘number of direct reports’ when trying to impress one another. Bad architects do the same thing, just with the latest buzzwords.

What needs to happen?

With fewer technologies one or two key players can learn them all and stabilize a project. This is not feasible with over forty technologies.

Already in the JavaScript community we are seeing a backlash against needing large numbers of frameworks, but this is causing further fragmentation. A core concept of AngularJS (and others) is to not rely on a plethora of other frameworks. Of course the early stages of learning ‘stand-alone’ frameworks like AngularJS are tough. Frameworks generally do not save time until the second or third project on which we use them. We could learn AngularJS, but what if our next project does not use it? Time wasted, likely no efficiency gained.

No-brainer: Reduce additional frameworks

Doh, virtually all of us realize this! The problem is how? Personally I am learning AngularJS and node.js with the intent of waiting until a suitable position appears. This approach vastly limits the projects we can work on.

Personal experience shows that once a project with a vast number of frameworks is given the green light, it takes a Herculean Effort to change, or even tweak, its Solution Architecture. As independent contractors we can avoid clearly-crazy projects, including those loaded with buzzwords. Unfortunately that limits the projects we can work on. It may keep us sane though; I believe:

“Beyond two new technologies project success is inversely proportional to their combined complexity”

Avoiding clearly-crazy projects means we avoid more failed projects, so it could be a wise choice in the longer term.

Ensure the Solution Architect implements

I used to believe this was a perfect solution: if only Architects had to build what they preached, they would constrain Architectures to the bounds of reality. Unfortunately this appears not to be universally the case; perhaps it reins them in a small amount? I have witnessed more than one case of someone attempting to implement their own bizarre architecture.

In the worst case, having to implement themselves must rein in an Architect’s craziest ideas. Personally I shy away from pure architecture, especially across multiple teams. Before now, when under pressure, I have resorted to semi-bluffing and palming quickly cobbled-up ideas off onto others, safe in the knowledge I did not have to implement them. I soon became cognizant of what I was doing and brought it to a swift halt. Do you think others will be so honest? Let’s try to ensure Architects are on implementation teams.

Ignore vendor influence

Guy Kawasaki has a lot to answer for. IT vendors have long tried to sell us what we do not need, but Guy Kawasaki introduced many techniques we see today.

Attended a free conference or user group with quality speakers lately? Receive free trade magazines? Java ones are funded by the larger players in that space, Microsoft ones often (indirectly) by Microsoft themselves. There is great value in these resources, but please keep your eyes open for manipulation.

SOA is a primary example I refer to. Magazine after magazine had SOA emblazoned on their front covers; many conference talks were around SOA... it became the buzzword du jour for years. SOA used conservatively is fantastic, but from about 2002->2010 I saw project after project with SOA sprinkled around as if salt from a large salt-shaker. Re-factoring to remove/ short-circuit SOA was a key technique of mine – ‘strangely’, removing much of it led to much more maintainable and performant code. Why was SOA so heavily hyped by our industry? Distributing code leads to more servers, which increases hardware sales and, more importantly, server license sales. Server license sales are where the big players make their real money. Even a smaller company’s SQL Server or Oracle licensing costs soon ring up to millions of dollars. High costs accompany CRM, ERP, TFS, Sharepoint and most other common server-based software.

Younger Architects are particularly susceptible to vendor influence. Younger people are more easily influenced, tempted by implicit promises etc, and soon saddle their projects with many trendy buzzwords. How could a project possibly fail if every buzzword is hot on Reddit and our vendor representatives cannot stop talking about them?

Embedding consultants/ evangelists into large companies is very common. Received free conference tickets from a vendor? Free training and elite certifications? Sorry to lift the curtain, but clearly these are tricks which exist to coerce you into using particular technologies... and buying more servers! The consultants and evangelists are of course generally not evil, but they are trained to believe in what they are selling.

Become a Solution Architect

Being the Architect certainly works. Every project I had significant control over was a tremendous success. Unfortunately most projects select their Architects based on popularity with management and other non-technical attributes.

Frequently senior leadership believes Solution Architects should manage multiple projects and not be hands-on with code. This is a mistake. Personally I have turned this role down several times, as it leads to poor Architectures – as stated above, I have caught myself bluffing in this role before. Solution Architects should not span multiple projects. Staying hands-off for long leads to believing the marketing of technologies, and marketing is often far from reality.

Most companies’ in-house Architects tend not to be the strongest technically. Senior leadership looks for softer skills – can they convince/ bully others, do they have a large physical presence, etc. Notice how often we see tall white male Solution Architects? ~80%+ of the time, yes? When this is not the case, the Architect is almost always technically sound – because they attained the position on technical merit. All too often leadership looks for someone ‘with weight to throw around’ – at the Fortune 10 discussed above, virtually all ‘thought leaders’ in our department had a large physical presence, and shouting at subordinates was second nature. It was amusing when I was asked to review the work of the two worst because so many people were complaining about them.


Hopefully it is clear that we must reduce the number of technologies in our projects. For the foreseeable future it is unlikely we can return to the stability/ predictability of pre-Internet/ tech-boom days.

This post is far longer than my notes/ original outline predicted. In future I will partition posts into more digestible subtopics with more focus on how we can improve. There is good information here, so under time-pressure I decided to publish as-is. Being between contracts AngularJS and node.js are calling my name. These two appear the most likely to emerge as victors from current JavaScript framework fragmentation.

Author: "dotnetworkaholic" Tags: "Development (General), Management, Produ..."
Date: Friday, 30 May 2014 23:05

Google has never linked me directly to this information, just to theories. One day time permitted me to run careful tests, so I am sure these techniques are correct/ efficient:

Simple tricks to reduce a VM’s disk needs:
These will wipe GBs from your vmdk/ vdi.

  • Disable Windows hibernation (hiberfil.sys is the size of installed memory, you don’t need it)
  • Disable the memory paging file (paging file in a VM makes little sense to me)

Compact a VMDK (VMWare including VMWare Player):

  • Ensure you have no snapshots (as of writing compacting does not work with snapshots)
  • Launch the VM
  • Inside the VM defragment its disk (Defraggler works great, Windows defrag is ok)
  • Inside the VM run “sdelete.exe -z” from DOS (as admin). This zeros out the free space and is an essential step
  • Shut down the VM
  • From VMPlayer: Edit Machine Settings -> Hard Disk -> Utilities -> Defragment (optional step, sometimes helps – official documentation is poor)
  • From VMPlayer: Edit Machine Settings -> Hard Disk -> Utilities -> Compact

At the final step you should see a huge reduction in VMDK size.

Below is a screenshot showing the features in VMPlayer. Remember this is next to useless unless you run Mark Russinovich’s “sdelete.exe -z” to mark free space with zeros. Compacting VMs has been this way for years; it’s April 2013 now and surely soon ‘detect and zero free space’ functionality will be built into their compact options.


The image above shows a VM that reached 20GB once, before being compacted back down to 10.7GB. These are typical results. Once compressed, my two work VMs zipped down to ~4GB each; fine for archiving working databases, dev environments etc. One customer’s backup procedures left me concerned, so weekly the VMs were AES-encrypted and copied to a USB keychain flash drive.

Compact a VDI (VirtualBox):

I used VirtualBox from about 2008 until very recently. Here are the steps to compact a VDI.

  • Ensure you have no snapshots (as of writing compacting does not work with snapshots)
  • Launch the VM; inside the VM defragment its disk (Defraggler works great, Windows defrag is ok)
  • Inside the VM run “sdelete.exe -z“. This zeros out the free space and is an essential step
  • Shut down the VM
  • From DOS (as admin):
    • cd <location of your VDI>
    • “C:\Program Files\Oracle\VirtualBox\VBoxManage.exe” modifyhd <your disk’s name>.vdi --compact

Hope this helps folks. Any issues/errors please post in the comments and I’ll update the post.

Author: "dotnetworkaholic" Tags: "Technology"
Date: Monday, 03 Feb 2014 08:59

It has been fun. Following a month of ramping-up on JavaScript, AngularJS, Node.js and Git, my conclusions are:

  • AngularJS looks great
  • Hold off on Node
  • Keep JavaScript close, but not a best buddy
  • JavaScript transcompilers are promising
  • Git is really easy to install/ use; embrace over SVN for disconnected commits/ simplicity

Surprisingly easy to get up and running with Node

It is surprisingly easy to get up and running with Node; the screenshot below shows what took a little over an hour after deciding to install Ubuntu and use Eclipse as an IDE for Node development:


Git is trivial to install on Windows/ Linux. It took minutes to create a Git repository in Dropbox.

What is Node.js?

Node has a lot of what you are already used to. For almost everything we can do in .Net there is a corresponding Node package. Socket IO, http communication, async, MySQL provider and so on. These are called modules in Node; installed trivially using the Node Package Manager (npm) via a terminal prompt. Many popular JavaScript libraries are also available as Node Modules: Underscore, Mocha and CoffeeScript being particularly popular.

Node literally has one thread and an event loop which cycles through pending events. This means any part of your codebase can block other requests. A major difference is the style of coding in Node: ASP.Net etc are implicitly multi-threaded, pre-empting threads to ensure each server request gets its share of CPU time, whereas Node code must be crafted so that nothing blocks the single thread.
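A tiny illustration of that single-threaded model (the busy-loop is a hypothetical stand-in for any CPU-bound request handler): one piece of synchronous work delays every other pending callback.

```javascript
// One thread, one event loop: a timer scheduled for 10ms cannot fire
// until the synchronous busy-loop below releases the thread.
const start = Date.now();

setTimeout(() => {
  console.log('timer fired after', Date.now() - start, 'ms'); // well over 10ms
}, 10);

// Hypothetical CPU-bound work standing in for a slow request handler.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* busy-wait on the only thread */ }
}
blockFor(200); // every other request/callback waits behind this
```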

While learning Node, much tutorial code required piping streams and nesting JavaScript callbacks; apparently most Node code is like this. Such code soon becomes difficult to follow and comprehend. Familiarity will improve this, but well-crafted OO code will always be easier to understand.
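The nesting problem looks like this in miniature (stepOne/stepTwo/stepThree are hypothetical async operations following Node’s standard `(err, result)` callback convention, not real APIs):

```javascript
// Sketch of the "callback pyramid" common in tutorial Node code.
function stepOne(cb)      { process.nextTick(() => cb(null, 1)); }
function stepTwo(x, cb)   { process.nextTick(() => cb(null, x + 1)); }
function stepThree(x, cb) { process.nextTick(() => cb(null, x * 10)); }

let result;

// Each callback wraps the next; error handling repeats at every level
// and the code drifts steadily to the right.
stepOne((err, a) => {
  if (err) throw err;
  stepTwo(a, (err, b) => {
    if (err) throw err;
    stepThree(b, (err, c) => {
      if (err) throw err;
      result = c;
      console.log('result:', result); // result: 20
    });
  });
});
```

Naming each callback as a separate function flattens the pyramid, at the cost of scattering the control flow across the file; that tension is exactly what the tutorials wrestle with.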

Why Node?

JavaScript is everywhere; many developers know JavaScript so why not use it server-side too?

We write validation logic in C# on the server and again in JavaScript in the browser. Using Node we can reuse the same code.

Performance is a huge seller. Apparently Node.js is bad-ass rock-star tech that can blow ASP.Net etc out of the water performance-wise. Let me debunk this: performance is an area where I really kick ass; I have tuned many systems (small and large), generally seeing ~100->400 times improvement under load with surprisingly minimal tweaks. Most were systems that had already been tuned.

Performance is a function of your developers and/ or having someone on staff who understands performance holistically. Do not select a technology because its theoretical maximum load is 20% higher. At a Fortune 10 I tuned two maxed-out Datacenter installs (~thirty machines each) until all the machines were using almost zero CPU. Undoubtedly several people had spent weeks or months analyzing which machines to buy for ‘peak performance’. Architecture, sensible implementation and tuning are where real performance gains are found.

The performance of Node can be killed by any one bad section of code, and the tools to tune Node are very immature. With .Net we use WinDbg/ sos.dll to analyze production systems; it is very difficult to analyze Node in production.

JavaScript is the Future?

As many tech friends said: on my third read of JavaScript: The Good Parts it really made sense. Quality coding can be achieved in JavaScript, but it is far from a perfect language.

Google’s Dart, Microsoft’s TypeScript and CoffeeScript all bring real OO concepts, including classes and even static typing, to JavaScript. Currently they transcompile to JavaScript. Within five years expect a language in this category to have gained traction and been adopted into all browsers. Current versions of all browsers self-update; once most of the world is running self-updating browsers it becomes possible for new standards to roll out quickly. The powers that be in the Internet world will settle on a standard; that is why Microsoft threw TypeScript into the ring.
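For context, here is roughly the prototype-based JavaScript that a simple class declaration in these languages transcompiles down to (a simplified hand-written sketch, not actual compiler output):

```javascript
// A "class" in plain JavaScript: a constructor function plus
// methods attached to its prototype.
function Animal(name) {
  this.name = name;
}

Animal.prototype.speak = function () {
  return this.name + ' makes a sound';
};

var rex = new Animal('Rex');
console.log(rex.speak()); // Rex makes a sound
```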

JavaScript was cobbled together quickly in 1995 as a scripting language for Netscape. It is very weak and will eventually be ousted; transcompiling is an intermediary step.

Final Conclusions/ Predictions

Node.js is hot today, but not a good fit for the kind of applications I personally work on: large systems with a traditional RDBMS back-end and a lot of inter-system messaging to slow legacy systems.

Node.js is helping build a great base of JavaScript frameworks for the enterprise, but most are from small, untrusted sources. It is only a matter of time until a serious security breach occurs via someone slipping malicious code into an open source JavaScript library. Once a high-profile incident occurs, the JavaScript community will figure out how to mitigate such attacks.

A Node.js rival with multithreading will appear, or Node itself will be extended. Ruby gained multithreading after years of its user base stating single-threaded web servers are fine.

JavaScript will morph into a real language within five years.

Author: "dotnetworkaholic" Tags: "Development (General), Technology"
Date: Saturday, 01 Feb 2014 15:10

Productivity Distribution within an IT Team

For decades we have been taught that productivity is a Gaussian Distribution (bell-curve) with most team members contributing about average performance:

Gaussian Distribution

A recent article in the journal Personnel Psychology caught my eye. It argues performance generally follows a Pareto distribution:

Pareto distribution

This much more closely correlates with what we often see in IT. Most teams have a minority of amazingly productive members while the majority unfortunately contributes little. This is the crux of the issue I am trying to analyze and address. Often most of the ineffective people do have the ability but lack the motivation. Many classic texts repeatedly state that in our field human motivation is the single strongest contributor to productivity.

Educating and motivating moves many members from the left of this curve towards the right. I have done this many times, transforming what management saw as weak people into confident and able producers. This curve is always going to exist, but we can do much to reduce its height by bringing more individuals to the right.

I have lingered on the left of the curve for too long several times; it is an unpleasant place to be and personally intolerable for an extended period. Everyone joins towards the left and progresses to the right as they learn the stack and are better able to collaborate with the rest of the team. Not everyone has the technical skills, people skills, experience or raw dedication to move to the right; this is where leaders should step up, helping team members improve and not hindering the ones they disfavor (color of skin, educational background, physical appearance etc – we see it all).

Many will disagree

Non-technical managers and Scrum Masters often resent the hyper-productive and have coined a derogatory term for them: hero-developers. Googling this term finds much derision, accusing them of being bottlenecks and a worst practice in Agile.

Story time (yes, I have lots of stories): on a recent Fortune 10 engagement I learned our team of ~thirty previously had one member who could dive anywhere into our stack; virtually everyone said he held the project together. Mere mention of his name to their ex-Manager made him angry: “I am sick of hearing his name” etc. Many told me that the manager (now a senior Director) drove the hyper-productive developer out of the company. On my arrival the project was very much in disarray with shockingly serious production issues; issues the entire team could not solve, and had not solved for over a year.

But they are wrong

Once ramped up at the Fortune 10 I methodically worked through and solved all of the system’s serious production issues. It took many months of evening and weekend work before I was able to effectively and efficiently attack the larger problems. This was an absolute extreme in my own career, and after eighteen months on that project I was absolutely exhausted. Evidently a ‘hero’ was needed, but trust me that no one wants to be one for long – it is too draining. We should avoid the need for heroes by enabling others to be more productive.

Technical types will guess a key to that staggering success was becoming familiar with their full stack, acquiring access rights to sub-systems and establishing enough trust to ‘step on toes’ without serious political backlash.

Why was that particular project in such trouble?

That particular project is at the extreme of what I have witnessed, but serves as a useful example.

Essentially, non-technical manager after non-technical manager forced great engineer after great engineer out of the door. Despite ‘saving the day’ again and again, even my own manager fought tooth and nail against most initiatives I proposed, despite backing from virtually the entire team. This division was an extreme case and has now been dissolved at that Fortune 10; their own politics almost certainly led to their demise.

A salient point is that everywhere I observed failure this category of culture existed – many potentially highly productive members kept their heads down and pretty much just stayed around for the paycheck, hoping their next manager would be better. Those are almost direct quotes from several (very honest) people!

“Not a Team Player” – common accusation from weak leadership

Many reading this may be thinking that I am not a Team Player. There is a great image doing the rounds on social networks:


Which can be countered with the following memes:


Unfortunately it does not get much better after graduation!

This would work better if it conveyed the manager as intimidating the ladders and actively preventing them from performing their usual function.

Who is Right? What to do?

Almost everyone laughs at the above images; because there is uncomfortable truth in each extreme.

Common sense suggests establishing measurable metrics you believe are critical to project success. Decide who the producers are based on those metrics, not who takes you to lunch most often, is the most agreeable, appears confident etc.

One simple metric I use is to scroll through source code commits and look for the names that appear most often; one or two names generally make up the bulk of commits. Think twice about sharing your metrics with the team; political players will game them. Of course, watching for gaming could be a metric you keep to yourself :) Over the years I have seen several people play ‘a numbers game’ with measurable tasks like fixing defects: they select only trivial defects, and even then many of their ‘fixes’ are re-opened. This can be tricky to detect; one particular ‘gift-of-the-gab’, ‘charming to most women’ individual would take the QA team (both female) to lunch/ dinner and ensure that they never re-opened his defects but raised new ones instead.

Long term team/ project success

Too often on software development projects we see a very uneven distribution of work. When this is addressed we see very successful projects. I have done this many times, but unfortunately have many times been told “when you left, the project/ team fell apart”. In this series of blog posts I hope to make headway into creating teams that sustain themselves even after the ‘hero-worker’ has departed. A gut feeling is that those teams had no-one else with the courage/ persistence to challenge poor decisions. Most in our workforce do possess the necessary technical skills (or at least the ability to learn); the problems lie elsewhere. Motivation is certainly a common one.

Observations during Success, Total Failure and Death Marches

With little deviation this is what I have observed in various categories of software development projects:

Successful Projects:

One or two very productive people drive the team. Viewing the commit logs shows that a vast percentage of commits come from one or two people. Generally they work many evenings and weekends. Hero-driven development is often used as a derogatory term. In reality, on a very successful project virtually all ‘heroes’ try to spread their knowledge to take pressure off themselves.

Why does the ‘hero’ thing work? For years in interviews I summarized myself as someone who ‘pulls the strings together’. Being able to traverse the full stack and nail down problems fast is key. Without at least one full-stack engineer, teams fall victim to finger pointing between knowledge silos and technical debt rises fast. Siloed development leads to many issues that no single sub-team/ person will take responsibility for. Such problems tend to linger.

Highly successful projects occur when there are several highly effective members who can bring others up to speed. At the Fortune Ten discussed above my second project had another very talented .Net architect; our productivity was amazing and we brought at least one other architect up to speed (it was a cloud based framework so everyone had 20+ years of experience, but all of us lacked true full-stack knowledge before the collaboration). Good collaboration and decent motivation meant we did not work evenings or weekends, yet the project ran smoothly.

Unsuccessful Projects:

Since about 2012 I have left projects very early if they required intense ‘hero-driven development’; all three projects I left failed/ are failing. Prior to 2012 I stayed at many disasters, performed the ‘hero-developer thing’ and rescued several projects. This led to varying degrees of burnout.

Death March Projects:

This is the category of project that generally hits production but is way over budget and schedule; people are exhausted and operational costs are sky high.

Very large companies tend to operate this way; they virtually always get systems into production because they have the financial resources/ big-name pull to keep hiring through the code-and-fix phase, even when several times over the original budget. When close to failure they often hire an army of contract staff; generally a small subset of those are ‘genius-level’ and stabilize the system (and enjoy charging many hours of overtime!). This latter solution is something I saw often early in my career working as a junior developer for very large companies.

Over the years I mostly kept a low profile on such very large projects (politics were overwhelming). A key observation is that potential heroes were trampled on and generally left the project/company or retreated to the safety of a development silo. This was most evident on Government Projects.

But my projects have never failed!

Do not hire anyone who states this. In email correspondence with leaders of projects that failed, many state that the project was a success. Bizarrely, some seem to believe their own lies.

Everyone with experience has seen several failures because most IT projects fail.

How common is failure? A recent study reported by Computerworld reveals that 94% of IT projects in the past decade with budgets greater than $10 million, in government and out, launched with major problems or simply failed.

My experience with large companies and government organizations is that ‘tall blades of grass are lopped off’ before they ‘take over the garden’. Recently on a 300+ person project the cost exceeded a million dollars per screen delivered! I expect that project will make it to production, but at enormous cost and with major issues. They are a gigantic, cash-rich company, but the managers/ Scrum Masters were not tolerating stand-out individuals (other than themselves, of course). Apparently their vision is a Software Factory: masses of cheap workers churning out code at a consistently high quality. Wishful thinking; this is nowhere close to reality in software development.


Bringing others up to speed is a key point here.

Hopefully we can accept that the distribution of effectiveness is closer to a Pareto distribution than a Gaussian. Once that is accepted, we can target reducing the height of the distribution and head towards success with a reduced chance of project failure/ serious overruns.
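The Pareto claim above can be illustrated numerically. This is a purely illustrative sketch, assuming a Pareto shape parameter of 1.16 (the value classically associated with the 80/20 rule); it is not based on any real team's data.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Hypothetical team of 100 members whose individual output follows a
# Pareto distribution; shape alpha=1.16 roughly yields the 80/20 rule.
outputs = sorted((random.paretovariate(1.16) for _ in range(100)), reverse=True)

# Share of total output produced by the most productive 20 members.
top_20_share = sum(outputs[:20]) / sum(outputs)
print(f"Top 20% of members produced {top_20_share:.0%} of total output")
```

Under a Gaussian the top 20% would produce only a little over 20%; under a heavy-tailed Pareto they produce the large majority, which matches the commit-log pattern described earlier in the post.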

Author: "dotnetworkaholic" Tags: "Development (General), Management, Produ..."
Date: Sunday, 26 Jan 2014 16:13

Having used DOS and Windows since the mid 80’s, it was time for a change. Vista was never reliable on any of the three machines I tried, and Microsoft fanboys were killing credibility in the user group scene – heck, it got to the point where saying ‘Google’ was not permitted. It would immediately be corrected to ‘Bing’, sometimes by a chorus of fanboys! This anti-Google sentiment has to stop. Fanboys may go “Rah-Rah, Bing-Bing-Bing”, but how many of them command respect from peers? Many competent people stopped attending Microsoft events.

So how did OS X work out? Well, it’s certainly a good operating system and does most things I need, but obviously is not going to run Visual Studio anytime soon. Lack of open source software was a major gripe; 7-Zip, KDiff3 and many other great open source projects just don’t exist for the Mac.

Snow Leopard was a total flop, costing me hours in lost time as it broke our HTPC, which was running Hulu and XBMC. Initially Mac fanboys jumped all over me for criticizing it the day after it was released, but over time the general consensus became that Apple needs public beta testing before releasing an OS upgrade. A few service releases later Snow Leopard works fine, but lacks the snappiness of Win7.

Apple hardware is fantastic. Developing on a Mac Mini is heaven thanks to the virtual silence. The tiny form factor helps declutter workstations, keep a clearer desk etc. Using a Mac Mini as a HTPC is a little expensive but totally worth it; low power means low heat and they’ll happily live in a cupboard. New Minis also support two digital monitors, an IR remote, Bluetooth, the latest WiFi and Gigabit Ethernet, and have a stack of USB ports. The 13″ MacBook Pro cost $1200, plus the cost of aftermarket 4GB RAM and an Intel SSD. With the SSD and Win7 it’s plenty fast even for a demanding developer. The quality keyboard and touches like back-lit keys, the multi-touch track-pad etc make it easily worth the extra cash. Oh, and using OS X battery life is nine hours for the latest model – I see over five hours with WiFi and Bluetooth on a 2009 model.

So why the move to Win7? “It just works” scream most Mac users when you ask them “Why a Mac?”. True for basic users, but not for people like us. Hours can be wasted on simple tasks like trying to format a non-Apple external hard-disk. This is where the experience breaks down. Problems with Windows are generally solved with a quick Google search. Certainly not the case for OS X; with the hard disk, users were berated online for not buying an Apple-branded one. I have Bluetooth problems with a Microsoft mouse and have never been able to resolve them other than by rebooting. Do you use two monitors? Fine, that works… oh, you have one in portrait mode (like I do)? Not going to happen in OS X yet, sorry. HTPC? What, you did not buy an Apple TV unit? The Mac Mini works well as a HTPC but does not support font scaling like Win7; I found a hack which works in some cases but not others. Of course almost none of those cool HTPC open source tools exist for the Mac; ironically XBMC is one that does, and it’s almost as stable as it is on Windows.

Hopefully this post helps you consider whether OS X is for you. It’s a good OS, but Win7 is so much better in so many ways. If you want something that “just works” for simple tasks I highly recommend an iPad. That device is so simple I bought another for my parents. They grasped it quickly and are having few problems. Also, the design of iPad apps means it’ll be very hard for bad guys to devise a virus for them. Macs don’t get PC viruses, they get Mac viruses; I would wager the iPad will remain virtually virus free.

Author: "dotnetworkaholic" Tags: "Technology"
Date: Sunday, 26 Jan 2014 16:05

Virtually all software developers eventually experience neck and/or back pain. Mine gradually increased from light neck pains in 1995/6 to being unable to work one day in 2001. The Doctor’s advice: “stop doing what makes it hurt”. Useful… Earlier today Martin Fowler posted “Back pain is a common issue, but everyone’s pain (and treatment) is different”. Indeed it is, but he also posted a photo that invoked a “fingernails dragged down a blackboard” response from me. It’s a photo of programmers at work; many, IMO, asking for neck/back problems in later life.

Let me share the research that has kept me pain free for ten years. I am not qualified in this area; these are just the findings of a long term computer programmer (I have done little but code from 1981 to 2010, and hope to keep it up until The Singularity makes us obsolete):

  • Move your screen(s) up to eye level
  • Every time you exercise do neck stretches
  • Read a book on back pain and/ or neck pain. These cover common issues and remedies that work for many people; read the many glowing reviews on Amazon

Screen at Eye Level:

This should be common sense. I work looking forward, not down. Peering down compresses the neck vertebrae – probably not good for extended periods of time. Common sense says “mix it up if you can”; don’t sit in the same position all day. Personally I alternate between a regular sitting workstation, a standing workstation, a laptop on a box (or whatever’s handy at the time) and casual surfing using an iPad like a book (not looking down at a laptop). Combined with regular exercise and stretching, I still spend almost all waking hours in front of a computer. Of course from time to time I become lazy and stop stretching after running/ cycling; the pain starts creeping back. Returning to regular stretching has always cured it (so far, touch wood!)

Sitting workstation:

I use an ergonomic Zody chair with VESA arm mounts. Yes, I have hauled these to client sites. We just moved house and I don’t have a photo handy.

Standing Workstation:

These can be cobbled together very cheaply. Skip those expensive stand/sit combo workstations and build another work area in your home office. The following photo shows a $100 Ikea kit. Notice the two mice? I used to have pain in my mouse-button fingers. Learning to use a mouse left handed and swapping between the two cured that too. Props to Paul Swan for the mouse tip – he’s a total genius from my undergrad CompSci degree, now working on the Windows Server team.

Laptop on a box:

The title of this post. Being in my late thirties, peers are starting to get aches and pains. Many on Facebook complain of sore necks from laptops. If you listen to only one piece of my advice: Put Your Laptop on a Raised Surface when using it. Oh, and wear sunscreen :) Notice I use a real keyboard and a wireless mouse that can be used in either hand – these cost peanuts compared to a Doctor’s visit. This is a great setup for short term client engagements – they always have something to stand a laptop on.

Can a $10 Book from Amazon really help?

The books I purchased in 2001 were an incredible help. I am not suggesting these as an alternative to a Doctor’s advice, just worth considering if your Doctor has been of no help.

Hopefully this post allows some to extend their coding careers. Please take this advice as just that, general common sense advice.

Author: "dotnetworkaholic" Tags: "Other, Productivity, Technology"
Date: Sunday, 26 Jan 2014 15:41

Thinking of buying a 3D Printer? Let’s give that thought a quick reality-check… I was ready for a somewhat hobbyist experience, but it took way longer than expected to get the printer assembled and printing acceptably.

Mine is a $549 Printrbot kit. Interestingly, perusing forums for the current generation $2,200 MakerBot Replicator 2 showed their owners experience issues similar to what I’ve gone through.

Assembly of the kit took five to six hours. Getting it to print reliably using PLA probably took a further forty hours, though I did fabricate a new platform and adjustable bed. If time permits I’ll follow this post up with common issues + solutions, so maybe you’ll be up and running in under twenty hours. Either way, expect to become a 3D Printer technician and be sure to have an abundance of patience handy.

The image below sums up the experience nicely. See the tools? Notice the spool holder made from DIY parts? Surprised to see DIY screws, power drill etc nearby? Well… forums and blog posts are littered with people who purchased a 3D printer and abandoned it before making good prints. Surf some forums and you’ll notice virtually everyone showing off their prints has their printer in some kind of hobbyist workshop; typically they’ll have loads of tools around and are veterans of past fabrication/ advanced DIY projects.

Complete Printer (1) (Medium)

But apparently a child can build one? Yes, some manufacturers are suggesting you buy one for your eight year old and he’ll have it assembled and printing in no time… Not a chance! I did see one blog post where a young boy had assembled the kit, but “so far they were having trouble printing”. Younger than twelve I’d say totally forget it and buy him/her a Lego Mindstorms or similar. High school age is probably more appropriate; even then a tinkerer Father on hand is almost a necessity.

Should I spend $550 or $2,500?

Now that my $549 Printrbot is dialed in and has had a few modifications, its prints are excellent. Given the explosion in 3D Printer popularity, anything built in 2013 is going to look like a dinosaur in a few years. Unless you need to print very large objects now, I recommend buying a lower priced printer today and upgrading in a couple of years, when mass production techniques should bring costs way down and take quality/ ease of use way up. I purchased Make: Ultimate Guide to 3D Printing for $6 as they reviewed about fifteen current generation printers.

Should I buy a Kit or Fully Assembled?

Assembly is typically only ~$100 extra, but the experience [frustration!] of building your first printer is invaluable. I was too young to build my first home computer in 1981, so was bought an assembled one. I did not make that mistake this time; the kit came with poor/ incorrect instructions, but it was the correct route to take, for both the experience and the bragging rights in twenty years’ time.

What to expect from a Kit?

Until the last couple of years most 3D Printers were built from open source online designs. Hobbyists sourced components themselves and spent unbelievable amounts of time tweaking. My Printrbot is a well priced kit building on these many years of open sourced achievements; I doubt there is much profit margin for Printrbot. Also, you will have noticed a lot of plywood on my printer; that’s laser cut plywood. It has issues but is an inexpensive way of producing the parts – once a laser cutter is purchased, the cost of the chassis etc is next to nothing for smaller scale manufacturers compared to the material/ hours required to 3D Print parts, as was common in the early days.

A kit gets you to where these serious hobbyists were, far quicker and for a reasonable price.

Take a look at these pictures for an impression of what to expect:

Printrbot Kit (1) (Medium)

Printrbot Kit (2) (Medium)

Their website says it takes two hours to build. Perhaps once I’d built a few that would be true. In reality it takes about five hours. Documentation is poor and was incorrect in several places, as incremental design changes have occurred since the videos/ online help were created.

How long until Printing is possible after Assembly?

Likely some owners obtain a decent print on their first attempt; most do not – in fact possibly most never do before putting the printer in a closet or on eBay! Expect several hours to several days before anything reasonable is printed.

ABS filament emits fumes which made my eyes sting, but it is much easier for a beginner to use than PLA filament. Since my workshop is small and unventilated I have moved exclusively to PLA; if you can ensure your printer is in a well ventilated area, start with ABS.

Hopefully time permits me to write a separate post on getting started, but main tips are:

  • Level the bed
  • Ensure z-home is set correctly (the distance from the extruder nozzle to the bed)
  • Ensure the printer extrudes at the correct rate (pretty easy for ABS)
  • Figure out the slicing and printing software
  • Use calipers to roughly calibrate movement along the x, y and z-axis (fine tune when printing ok)

A common beginner issue is clogging up the hobbed bolt with filament (tension springs set incorrectly will do this). I’ve had mine out for cleaning at least twenty times, but it’s been fine for a good while since I figured out the correct spring tension for PLA (springs compressed to ~13.5mm). The following image shows a clogged hobbed bolt being cleaned with a needle (I lost one needle, so I now store it using a strong magnet to fix it to a drywall screw):

Hobbed bolt cleaning (19) (Medium)

What kind of quality can I expect?

Quality is really good in my opinion. I’ve printed several printer upgrades and the precision is incredible; hex bolts drop right in where they should etc.

It will take a while and likely much frustration to get the printer dialed in. After about sixty hours I seem to have the basics down and when an issue occurs now know what to tweak. Take a look at the next photos to see my progression with PLA:

Example Printrbot Prints (1) (Medium)

Example Printrbot Prints (2) (Medium)

Example Printrbot Prints (3) (Medium)

The final print shown is a case for a Raspberry Pi. It fits perfectly, and this was printed before I added most printer upgrades!

Tools required?

Hmm… this is a tricky one. As a long time DIY type I have access to a vast array of tools. The drill press and cross vice shown below have been particularly useful, but these are not beginner tools.

At a minimum you need:

  • Precision Calipers (only about $20 on Amazon)
  • Quality screwdriver set (one with lots of quality bits is fine)
  • Jewelers precision screwdrivers
  • Small wire cutters and pliers
  • Tweezers (to tease strands of stray extruded filament away from the nozzle and lift prints)
  • Very sharp craft knife

These are almost essential:

  • Quality precision pliers, angled pliers and wire cutters (Xuron or similar)
  • Quality oil/ grease
  • Circlip pliers
  • Telescopic Magnetic Tool to hold awkward nuts in place during assembly

Semi essential tools (1) (Medium)

Ensuring bolts thread perpendicularly:

Semi essential tools (2) (Medium)

Final Words

Have a spare fifty+ hours, $550 and buckets of patience? You should buy one now!

Remember this is not Software Engineering; interacting with the real world is a whole different ball game. Fellow classmates and I discovered this during our postgrad Robotics degree. Software is predictable and repeatable; the real world, often not so much. In many ways 3D printing is similar to robotics – some software is involved, but there is a lot of trial and error + tinkering.

Author: "dotnetworkaholic" Tags: "DIY, Other, Technology, 3D Printer, asse..."
Date: Sunday, 26 Jan 2014 15:22

In addition to Scott’s and many, many other 10x Developer posts, here is an opinion from someone labeled ‘hyper-productive’ on most projects during the past twenty years.

Key Points

  • 10x is a zone we enter at times, it cannot be sustained
  • 10x has historically (in Mythical Man Month etc) meant the difference between worst and best developers
  • Being 10x better than average is a rare occurrence with short duration
  • 10x zone is only achieved after months or even years perfecting required skills/ preparing conditions for the highly productive period to commence
  • Many important tasks take the same/ very similar amount of time regardless of the individual – e.g. attending the daily stand up, liaising with another department, manual testing etc

Common Characteristics of being ‘in the 10x zone’

  • Task that does not lend itself to parallelism – i.e. efficiency is gained from reducing communication, discussing unimportant implementation details etc
  • Full stack developer avoids the need to interface with knowledge silos – this is very common with hyper-productive developers but often leads to resentment from the team unless great care/ preparation is taken to handle the political backlash
  • Using a library/ technique highly suited to the task at hand – a great example is deserializing/ serializing XML by hand vs. employing a tool like xsd.exe or JAXB. Many thousands of lines vs. a trivial library call can lead to man-months of saved effort
  • Already familiar with business/ technical problem – second time around is always faster. By the third time most problems are solved many times faster
  • Bust through politics – managed to obtain access to all necessary sub-systems, and are shielded from political backlash of stepping on toes so can sustain the access
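The library-leverage point above (hand-rolled XML handling vs. a tool like xsd.exe or JAXB) is easy to demonstrate. As a stand-in for those tools, here is a sketch using Python's standard `xml.etree.ElementTree`; the order document and its fields are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical order document. In .NET or Java you would generate typed
# classes with xsd.exe or JAXB; the point is the same: let a library do
# the parsing instead of writing thousands of lines by hand.
xml_doc = '<order id="42"><item sku="A-1" qty="3"/><item sku="B-2" qty="1"/></order>'

root = ET.fromstring(xml_doc)
items = [(i.get("sku"), int(i.get("qty"))) for i in root.iter("item")]
print(items)  # [('A-1', 3), ('B-2', 1)]
```

A few lines replace an entire hand-written tokenizer, escaping logic and error handling; multiplied across a project, that is where the man-months of saved effort come from.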

10x Projects are a far Bigger Deal

Over the years I have seen around fifty projects. Excluding the really crazy ones, 5-10x is a rough measure for the difference in productivity we see at the project level; probably more if one considers operational issues, as too many systems are moved to production before being fully stable. Many projects fail outright, so technically there is an infinite difference in productivity; 5-10x is a rough approximation between the ones that make it to production.

Common Characteristics of very inefficient Projects

  • Too many people – and/or too many of the wrong people
  • Cumbersome, un-enjoyable process
  • Developers treated as commodities – no praise for quality work
  • Better developers left the project early
  • Rude, non-technical leadership

Anyone with a few software development projects under their belt knows the main issues that lead to poor projects. Unfortunately, after all my years in software development I am now of the opinion that many are inefficient by design. Yes, the leadership team actually desires this! A sizable proportion of senior management prefer to have very large teams. The root cause appears to be Empire Building; a larger team brings the manager/ director more power. Many resist promoting others who could potentially challenge them, and several times I have witnessed life being made difficult for quality individuals with the sole intention of forcing them off the project. As time progresses leadership weakens, and this can decimate large companies, especially during ‘tough economic times’. This is why many very large companies look outside the firm when the time comes for a new CEO.


Obviously there is a massive productivity difference between the best and worst developers. That does not mean someone highly productive on Project A will immediately be highly productive on Project B.

A badly run project can cripple even the best individual’s ability to do great work. This is of far greater importance than bickering about if one developer is actually 10x or not. One great developer is just one great developer with a limited skill-set. The team wins the war, great developers are often decisive in key battles but they cannot win wars alone.

Developers have specialties. Personally I still struggle until I am up to speed with a new technology – it takes time and hard work to be ‘10x’ again with new technologies. Beware anyone who tells you otherwise, especially the ones who claim to be ‘10x in everything, every day, on every project’… That is not reality. Every project failure I have witnessed had at least one guy who would publicly state he was expert in a very long list of technologies.

Author: "dotnetworkaholic" Tags: "Development (General), Management, Produ..."
Date: Thursday, 02 Jan 2014 14:05

This post will be of little interest to regular technical readers. Mefloquine is a huge deal for those affected by its side-effects; hopefully Google will bring parties who need this information here.

Lariam is taken to greatly reduce the chances of contracting Malaria; in my case for a vacation to Africa. It is taken once a week, starting three weeks before departing. The day the third pill was due, my vision became somewhat distorted, accompanied by bizarre thoughts.

For years most Doctors have told sufferers Lariam does not have lasting effects.

Fact 1 – Effects can last years or be permanent

In 2013 the FDA gave mefloquine a black box warning as follows:

“Neurologic side effects can occur at any time during drug use, and can last for months to years after the drug is stopped or can be permanent”

Fact 2 – Mefloquine damages GABA receptors

This has been key to my recovery, and I was unaware of it until early 2013. The discovery (and its implications) took many hours of research. Virtually all useful information is buried in medical journal entries and neurology textbooks (read/ searched via Google Books). Only later in 2013 did I discover all the information documented in this post.

Astoundingly, since at least 2004 science has known that Mefloquine damages GABA receptors. The authors of the original paper appear delighted, because they now have a means to study GABA deprivation.

A presentation given in April 2013 led me to Google “mefloquine and GABA”, which in turn led to a treasure trove of useful information.

Medical science has only proved this in rodents, but it almost certainly applies to human brains. The effects of damaged GABA receptors tie in 100% with my own symptoms; virtually everything I read in this area tallies with them.

Unfortunately resolution of current MRI machines is too low to verify this damage.

Cyclic Symptoms

In 2001, every day or two I would cycle from feeling ~100% normal to unbelievably terrible. This cycle has lengthened to many weeks (as symptom intensity reduced). It is not a precise schedule; even about four years in, several months could pass with almost no issue, then wham!

Autopsies show mefloquine accumulates in the Limbic System, which resides on both sides of the Thalamus. Guess where the bulk of GABA receptors are… yup, in the Thalamus. Perhaps this is the root of our symptoms being cyclic; accumulated mefloquine occasionally leaves the Limbic System to wreck more GABA receptors in the Thalamus? That is a layman’s hypothesis, but it is noteworthy that mefloquine accumulates alongside where it can cause the most havoc.

Hope for New Sufferers

This is likely why you found this post. Everyone I have personally communicated with has become a lot better over time. Apparently many fully recover in weeks or months.

Personally, the first year necessitated time off work – which is why I took a second Masters degree. The first three months were insanity at times. Since the first year (or so), symptoms have reduced pretty much linearly. Since learning about the GABA damage I have found mechanisms to cope with the current symptoms. Thankfully issues are now very mild compared to twelve years ago.

Personal Theories – derived from Internet based Research

GABA is key, but be mindful that medication which stimulates GABA uptake can be addictive. At my late stage, over the counter vitamins and supplements work well to reduce symptoms.

A fellow sufferer shared an image of their neurotransmitter tests (shown below with permission). It indicates normal GABA levels yet very low serotonin and dopamine (shown on a log scale). Medical papers/ book excerpts state that low GABA hinders serotonin and dopamine release downstream. As a layman I deduce that mefloquine damaged brains are not processing their GABA correctly; this dove-tails with the research proving mefloquine blocks the brain’s GABA receptors.

GABA Serotonin Mefloquine toxicity

What is GABA?

GABA is a neurotransmitter. Like most of neurology it is poorly understood, but it appears to be a fundamental upstream neurotransmitter, regulating many of the brain’s basic functions/ states. Lack of GABA leads to confusion, anxiety, numb hands/ legs (peripheral neuropathy) and the other nasty symptoms we encounter.

‘Think yourself well’. Boy, was I sick of hearing ‘mind over matter’ etc advice in the early years. Ask anyone who says this to drink ten Starbucks ventis and try to stop their mind racing. That’s neurotransmitters at work… thinking positive thoughts might help a little, just like trying to rest would help a little after ten venti espressos! Neurotransmitter chemicals direct much of your brain.

How to Recover

Obviously, follow your Doctor’s advice. I am not at all medically trained; I have simply used Google to come to the conclusions posted here. Please don’t follow any advice I’ve mentioned without consulting a Doctor.

Luckily, since July 2013 mefloquine’s FDA mandated label states it can have lasting effects. Over the years I’ve seen at least six different doctors who stated lariam could not have caused my symptoms – “but here, try this SSRI”…

SSRIs wipe me out – over the years I have had to leave several IT contracts because they zapped my energy and made me slow and dumb at times. If they work for you then great! They probably lower my IQ by twenty or thirty points and certainly take away most of the ‘fighting spirit’ that is essential to survival in my career.

Some supplements appear to work well (at least in these later years, now symptoms are mild). B6 and B12 are ones I am close to 100% convinced have helped; B6 is scientifically known as a precursor to GABA (i.e. the body uses B6 to manufacture GABA), and B12 also appears to be helpful/ necessary to GABA production. When first taking them it was easy to tell if I had forgotten a dose – if I felt dizzy/ confused/ had numbness, looking at the Sun->Mon pill containers would consistently show two days in a row still sitting there. Several months later, missing days has no noticeable effect; medical literature states the body builds up longer-term stores of both. B vitamins appear to be safe to take as supplements, but speak to a Doctor and do your own research.

Less Safe Supplements

Side effects are cyclic and even today can be uncomfortable, but my own gaps between episodes can now be very long. Phenibut is a legal supplement that stimulates GABA receptors; occasionally I take a little for a few days in a row. Please don’t just rush out and buy some, and certainly don’t take it every day, as tolerance builds up. Again, speak to a Doctor and do your own research about its pros and cons.

In these later years my symptoms are mainly mild peripheral neuropathy (numb hands/ legs). Amazingly, ibuprofen helps. This was mentioned in only two citeable sources I could locate, but it can relieve the symptoms within minutes. Ibuprofen Sodium is a newer version of ibuprofen still available OTC; apparently in salt form it takes effect faster.

Lack of energy can also be an issue. Again this can be traced to a lack of GABA uptake. A ‘Super-B complex’ helped restore my energy (probably the B6 and B12, based on my research findings since). It was close to a night and day change. Beyond B Vitamins there are other supplements many consider ‘miracle energy boosters’; you are free to do your own research on these, but please research side-effects too and do your own risk-benefit analysis. Some may scoff, but as a guide I used the number of overwhelmingly positive comments on sites like Amazon.com. Today ‘Jarrow Formulas Methyl B12′ has 466 reviews on Amazon averaging 4.5 stars. The second most helpful comment is titled “Cured His [Diabetic] Related Neuropathy and Mental Confusion”. It certainly assisted with my neuropathy and confusion.


Again and again when researching symptoms, Diabetes came up as a suspect. Personally I’ve tested as borderline twice over the six years, but after second blood work it was ruled out both times.

This year I bought a diabetes self-test kit and have tested my blood sugar several times when experiencing lariam symptoms. It was fine. Those kits are under $20 and are an easy way to rule out Diabetes.

Tolerating the First Year

It is impossible to describe how unbearable the first year of lariam side-effects was. You have my sympathies. If you are on the worse end of sufferers, then likely all you can do is reduce the symptoms. At least you can now find many people online who have come through it, so know there is light at the end of the tunnel. Will the advice in this post help? Maybe a little. Probably you need to swallow some pride and take serious prescription drugs for the first year or so.

Common Sense Advice

There is a lot of wishy-washy advice on the Internet about how to cope with Lariam, especially on the Lariam Action site and its associated forum.

Sound advice is to avoid caffeine and larger quantities of alcohol. Caffeine is known to temporarily bind with and block GABA receptors! Interestingly, since year one with lariam one small can of Red Bull has been the only caffeinated drink I can have without risking triggering lariam’s side effects – Red Bull contains taurine, B6 and B12; all have a positive effect on GABA.

Bizarrely, I cannot tolerate decaffeinated products. Some appear to affect me worse than some caffeinated ones! Why? I have no idea. True caffeine-free sodas are fine to drink. The best advice is to cut caffeine out 100%.

The Lies/ Legal Action

Doctors continually told us that Lariam could not be responsible. Yet in 2013 its generic equivalent warning label states it can cause the symptoms we complained about all these years.

Profit from Lariam likely diminished greatly once its patent expired and generic equivalent(s) became available. The US Military is a large purchaser of Lariam/Mefloquine; this must have made Lariam very profitable when only one manufacturer could supply it. Only a few years after the profit motive was removed, the black-box warning was approved.

Legal action? Anyone suffering the effects of Lariam will not be capable of standing up to a court case; I would have been easy to confuse. The mere thought was terrifying; recovery was the goal to focus on. If settlements ever start appearing please let me know. It has cost me hundreds of thousands of dollars in lost earnings.

Author: "dotnetworkaholic" Tags: "Uncategorized"
Date: Tuesday, 23 Apr 2013 22:39

A friend just asked me what .Net groups are good these days. Atlanta is the Software Capital of the South, which means we have many great groups in town and I watch them all for interesting topics, but these are the three I personally attend most often:

http://www.meetup.com/AtlAltDotNet/ – great for new ideas and decent technical depth. It is a fairly new group still finding its feet.

http://www.iasahome.org/web/atlanta  Atlanta’s IASA chapter – always has super-smart people in attendance. Most meetings end up being a discussion (or argument!) with few punches pulled. The best part? BS artists are shot down very quickly and most never come back :)

http://www.atldotnet.org – the ‘main’ .Net User Group in town and excellent at delivering high-level introductions to topics. Networking is very good here too, as local MVPs etc. are at most meetings.

Other .Net focused groups are www.atlantamspros.com (now defunct) and  http://ggmug.com.

Hopefully that helps a few people looking to learn more and network :)

Author: "dotnetworkaholic" Tags: "Atlanta, Tech Events, Technology"
Date: Saturday, 20 Apr 2013 14:56

TeamCity is great; I had it up and running with automated tests in just minutes. But attempting a Release build generated the following error. Googling shows many people hit the same error, and I only found ugly solutions that required manual registry editing, copying of files etc.:

c:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(2015, 9): error MSB3091: Task failed because “sgen.exe” was not found, or the correct Microsoft Windows SDK is not installed. The task is looking for “sgen.exe” in the “bin” subdirectory beneath the location specified in the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v6.0A. You may be able to solve the problem by doing one of the following: 1) Install the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5. 2) Install Visual Studio 2008. 3) Manually set the above registry key to the correct location. 4) Pass the correct location into the “ToolPath” parameter of the task.

The first thing I did was use msbuild from the command line; sure enough, Debug builds were fine and Release builds had the same error. This ruled out TeamCity. Obviously installing VS2008 on our build box would solve the issue, but I assumed the build tools must be available elsewhere without needing all of VS2008. They are:

The solution is: download the Windows SDK and install the .Net Development Tools (it says 2008 Server but I did this on XP SP3):
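If installing the SDK on the build box is undesirable, there is another route I did not take: sgen can be switched off per configuration. GenerateSerializationAssemblies is a standard MSBuild property; the Condition below assumes the usual Release|AnyCPU configuration name in your .csproj:

```xml
<!-- In the .csproj: tell MSBuild not to run sgen.exe for Release builds.
     Pre-generated XmlSerializer assemblies are a startup optimization only,
     so skipping them does not change runtime behavior. -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <GenerateSerializationAssemblies>Off</GenerateSerializationAssemblies>
</PropertyGroup>
```

The same thing can be passed on the msbuild command line as /p:GenerateSerializationAssemblies=Off.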


Author: "dotnetworkaholic" Tags: "Development (General)"
Date: Thursday, 26 Jan 2012 17:35

As promised here are the slides from that talk. Glad to hear people enjoyed it!

Author: "dotnetworkaholic" Tags: "Uncategorized"
Date: Saturday, 30 Apr 2011 19:24

Over the years I’ve Googled and Googled for a solution to this. Heck, I even tried Bing! Finally time + an urgent need permitted figuring out a simple solution. To filter a log down to one thread using Notepad++:

  • Copy the square brackets and thread number to the clipboard, e.g. “[12]”
  • Menu -> TextFX Viz -> Hide Lines without (Clipboard) text
  • Press Ctrl-A (Select all text)
  • Menu -> TextFX Viz -> Delete Invisible Selection
  • Press Ctrl-A (Select all text)
  • Menu -> TextFX Edit -> Delete Blank Lines
  • That’s it! You are now viewing logging from only one thread

This works very quickly even with 70,000+ line, 10MByte log files. IMO it avoids the need for XML log4net logging and Chainsaw (or similar). Simpler is always better.
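For anyone without Notepad++ handy, the same filter is a one-liner on any machine with grep; the file names below are placeholders for illustration:

```shell
# -F treats "[12]" as a literal string, so the square brackets
# are not interpreted as a regex character class.
grep -F '[12]' app.log > thread12.log
```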

Demo: Two threads counting to 100:


Author: "dotnetworkaholic" Tags: "Development (General), log4j, log4net, N..."
Date: Monday, 28 Feb 2011 14:33

This was a dream bike build for me and lots of readers share the dream, so here goes:

Build or buy?
Building means you specify all the components. Even $5K bikes often come with pretty lame wheels, crank sets, cassettes etc., and you will not get to choose tires, seat, shifters, calipers, bars etc. It was a no-brainer to build, and I am happier with this ~$2,500 build-up than with most (all!) $4K stock bikes. Why the cost difference? I could hunt down discounted parts. E.g. the shocks, wheels, and frame are not 2009 models.

If you are returning to mountain biking, look in the ~$700 range for an off-the-shelf bike with front shocks and disc brakes; any less and you’ll not enjoy the sport. Getting semi-serious means $2K+ on a full suspension bike. The regulars I see again and again at local mountain bike trails are mostly riding $2,000-$3,500 machines, with maybe 10-20% spending even more. The sticker shock takes a few weeks to get over. If you do buy off-the-shelf, look at the specs to see if any parts are ‘custom’. Custom generally means a non-standard size, and replacement/re-use on another bike will be hard or impossible. Another good reason to build. At the end of the day this sport is more about the rider, but go too cheap and parts will soon break, costing you more in the long run.

Tools needed?
Quite a lot of tools are needed, probably at least $400 worth. In rough priority order:
. Allen key set (~$10)
. Allen key sockets (~$15)
. Torque wrench (~$50)
. Cable/ Housing cutters (~$20)
. Bottom bracket tools (~$30)
. Cassette whip/ remover (~$20)
. Decent work stand (~$200)
. Headset press ($50->120)
. Crown race setter (I use a $3 tube from a DIY store)
. Star nut setter (~$10)
. Pipe cutter (~$20)
. Chain link splitter (~$10)
. White lithium grease (~$3)
. Several misc DIY tools you should already have

Where to start
Stripping and rebuilding your old bike is a great learning experience; it should take you a day or two. You’ll be annoyed how expensive new cables/housings are, but you will have a slicker bike at the end of it. I bought an $89 Sette 16″ frame and rebuilt an old bike into one the correct size for my wife; it came in at ~22.5lbs, which is amazing for an $89 frame and $300 shocks.

Hunting down the Dream Bike
Boy, this took me many hours online – scouring reviews, price trade-offs, forum postings etc. The most useful resource is the mtbr reviews forum – for each part I looked at the bad reviews, and also got a good idea of what everyone was buying for that component. This way I found some parts that I am really happy with, e.g. Odi Ruffian removable grips and the Thomson Elite seat post.

Search lots of bike sites, as prices often differ widely; for each high-value part use comparison shopping sites like Froogle – this found me XT cranks at over $100 off when everywhere else wanted MSRP.

Below I will try to discuss the trickier choices made; in general I tried to distinguish between parts I may upgrade and parts I should never need to replace on this bike, e.g. ‘cheap’ mechanical BB7 disc brakes vs the Thomson seat post. In some areas you probably want to check out lower-end parts before splashing out; I put a Manitou R7 fork on my wife’s bike and rode it for a while before justifying $800 Fox forks.

Parts Arrive
This bike comprised eight or nine boxes of parts from five different suppliers. As I said, there can be HUGE price differences between retailers, so the extra shipping costs are more than worth it.

BMC Trailfox 2.0 Frame under wraps
This is an unsold 2007 found for $499; MSRP for a 2009 model is $2,149.

If this does not excite you then you are dead.

Avid BB7: My first disc brakes and they rock. No longer am I locking up/skidding after being airborne, and doing endos etc. is very controllable. Be sure to use Avid Speed Dial levers so you can adjust the lever pressure required.
Fox Talas 140 RLC: When downhilling it was immediately obvious that these work better than my Manitou R7 MRD, and I loved the Manitou. The 100-120-140mm adjustability is great; I run 120mm for XC and 140mm for DH/light FR. 100mm feels plain weird on the BMC frame so stays unused. With hindsight the 150mm Talas model would have been a better choice, but these things were expensive as-is. If buying a Fox fork, be aware that adjustable compression is only on the RLC models, which adds a lot more to the base price. Add the QR15 option and you are close to $1,000 for shocks alone!
Thomson Elite Seatpost: Brilliant! It has machined grooves which stop the post slipping, and it must be about as light as the carbon EC70 post that slipped so often on my old bike.


Mavic Crossmax ST: Good wheels are a must; try finding these stock on anything less than $5K. These can run tubeless tires, but as with hydraulic brakes that seemed an unnecessary complexity I did not want to deal with. Perhaps a future upgrade?

The Build Begins

Rule one: never clamp anything but the seat post; tubing on modern higher-end frames can be crushed otherwise. Install the seat post and mount the frame in your stand.
Rule two: weigh everything before it goes on the bike. You will want this info when shopping for upgrades.

Pressing in a headset: This BMC came with one pre-pressed, but a headset press is the tool you should use. Almost all headsets these days are threadless and press in, instead of screwing in like they used to. This is a much better system, as headsets coming loose was too common in the 80s and 90s.

Attach the cassette – it needs a lot of torque, and the special tool is called a cassette remover. This is a 9-speed cassette I had laying around; it will be a PG990 soon.

Attaching brake discs to wheels – use a torque wrench if possible. These are my first disc brakes so BB7 mechanicals were ideal; $46 each for 2009 models is a bargain. Rotor size was copied from a Specialized Enduro: 203mm front, 185mm rear. With hindsight this is total overkill and they will soon be swapped for 160mm front and rear. Unless you weigh 250lbs and shuttle ski runs I cannot see 160mm rotors overheating.

Pressing the crown race into place. That is a $3 piece of plastic piping, which almost everyone uses – no other special tools are required.

The other headset parts. This can be confusing for a beginner, so make sure you download the relevant PDF. These are sealed bearings; don’t bother saving a few pennies on the old bearings-in-a-cage variety. Note: I missed the compression ring from this photo.

Mounting the wheels and getting ready to size the steerer; use something to prop the bike up. Put the headset parts, stem and spacers (max 30mm) in place. Measure twice, cut once. Do not cut a steerer too short, as the repair will be expensive.

Cutting the steerer – the pipe cutter is just a plumber’s tool. Debur using the tool’s deburrer and then smooth off with a metal file.

A star nut setter: Star fangled nuts are easy to bash in with a hammer, and I have done that before, but not on an $800 fork. The setter tool is only ~$10, so why not use one?

Depth of the star nut varies depending on who you ask, from 4mm to 20mm. I set this one to 10mm. It should not matter, since the star nut is only used to preload the headset bearings; the stem bolts are what holds the bike together.

Leave a gap from the top of the spacer to the stem, since you are going to preload the bearings in a minute. The star nut bolt does not need to be very tight, just enough to stop play in the headset. I put the bike on the floor and tighten the steerer nut until steering becomes tight, then back off a little. At this point you should tighten the stem bolts.

Disc brakes and Hollowtech cranks are next. Both being new to me, it is time for lunch. There is no point rushing, and I want a clear head, especially for that expensive crankset.

Putting a caliper in place – very simple, but you will need a mounting bracket that matches your frame and the rotor size. There are three different standards out there, so do some homework on this or expect a trip to the local bike shop to find the correct mounting hardware. Avid brakes come with a cool auto-alignment system (CPS) that works much like the curved washers from v-brakes. This saves having to face the brake mounts.

Hollowtech cranks: The scariest component. MSRP is $305, but all affordable cranksets seemed to have bad reviews. Hollowtech means outboard sealed bearings and very few parts, and they look bulletproof. The black spacers are for use with a 68mm shell like I had – another plus, this is a one-size-fits-all deal.

Of course they needed a new tool. A torque wrench cannot be used with this tool, so I used the torque wrench on my car’s wheel nuts to verify what 50Nm feels like. You do not want to strip BB threads!

This side requires very little torque and simply preloads the bearings. Tightening the crank on is done with two allen bolts at 90 degrees to the crank – they come with a stack of warnings, so I doubt you’ll get it wrong.

Derailleur Alignment Tool: I am not sure how accurate the tool is, but all three bikes I used it on had misaligned hangers – misaligned in different ways, so I assume the tool is correct. Use the tool to check up, down, left and right, lightly bending the hanger until it is within 4mm on any measurement. I have to say that my shifts are pretty smooth, so maybe this helped?

The spoils of XTR components. The front mech comes with a helpful guide to set the correct distance from the chainrings. If you are buying XTR you can probably eyeball this measurement.

New SRAM chain. Note the removable link, which means the chain can easily be removed in the future. The other tool is a chain link splitter, which is only needed to remove links if the chain is too long.

Put the chain around both large cogs and allow for two extra links. I have run with zero extra links before and had no issues – actually I might do the same on this bike to reduce chainslap on rutted downhills. With a tight chain, running the extremes of gear combinations is the only issue, but that is a no-no anyway.

It is looking like a real bike now

Cables: … ugh, not hard but it takes a while. You must use a specific cable cutter or expect frayed cables. Housing and cables are different for brakes and gears, and these days we run continuous housing from the levers to the brakes – use tiny zip ties or special clips to attach the brake housing to the frame. Zip ties are my preference.

Initially I am using an old set of bars/levers, but wanted to dump the old grips. Push a WD40 tube in as far as you can, spray in oil and twist. These came off totally intact with little bother.

I had never used these bars and they were incredibly wide. It was cool to see preset cut-down points. Again, a few twists of a plumber’s pipe cutter and deburring is all that is required.

Removable grips: What an awesome idea for people that love swapping out components. My hands grip really, really well with these too – possibly the best part on the whole bike!

Six hours later it is complete (cables were tuned the next morning). It came in at 27lbs 8oz, which for a ~$2,500 5″ travel all-mountain bike is not bad at all. It should drop about a pound when replacing the old bars, shifters and oversized brake discs. Those tires are 2.6lbs too – yeah, yeah, weight weenie.

The Results?

Time and money very well spent. The brakes in particular are changing my riding style, locking up is now almost a thing of the past, especially when scrubbing off speed after time in the air – you know when a corner is coming up immediately after landing. Having full suspension makes rides so much less fatiguing and landing misjudged takeoffs is much safer.

For a brand that is almost unknown in the US it attracts a lot of attention and questions – everyone so far seems to approve.


Author: "dotnetworkaholic" Tags: "Other"
Date: Saturday, 26 Feb 2011 19:46

We have all seen the 80040154 COM error. Normally the solution is to run regsvr32 on the com dll so it is registered on the machine.

“System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {D1CB0D81-7D2B-4064-9AC7-D0D88DEC3D16} failed due to the following error: 80040154.”

Of course regsvr32 was the first thing I did, but the error still occurred in the ASP.Net MVC project on IIS7/Server 2008. Using regedit.exe I verified the dll was registered, so I ran the NUnit tests… same error! The solution is to permit the IIS7 App Pool to run 32-bit code as shown below:

IIS7 COM Gotcha

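The same checkbox can be flipped from an elevated command prompt via appcmd; “DefaultAppPool” below is a placeholder – substitute the pool your site actually runs under:

```bat
REM Equivalent of IIS Manager: Application Pools -> Advanced Settings
REM -> "Enable 32-Bit Applications" = True
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true
```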

Author: "dotnetworkaholic" Tags: "ASP.Net MVC"
Date: Saturday, 26 Feb 2011 19:46

TurboTax has been wonderful over the years, but the 2008 price totally made me balk. An eye-popping $229.80 this year for my needs:

$49.95 Premier inc Rental Properties + $34.95 Per State
$109.95 Business including S Corporations

H&R Block Home & Business 2008 promises the same functionality for $67.96 (Google for a discount coupon code). This was enough of a saving to try it, worst case I waste a few hours and buy TurboTax.

Since the IRS has not finalized rules for 2008, a few areas are not yet available, including what seemed to be almost all of the s-corp filing. Still, I entered our W2s, a dummy 1099 and all the details on my rental properties with no issue. Everything was as easy as it was with TurboTax. There are still all the detailed sections to ensure you are aware of what can be deducted without reading 17,000 pages of IRS publications or trying to get a competent accountant to talk to you. As for problems so far, none are worth mentioning, being just very minor niggles. Before buying I read online that H&R does not even let one print PDFs – well, I just saved my 2008 draft with no problem (File -> Save As PDF…).

Here are a few sample screens; please don’t laugh at the repairs on the rental – it had barely an issue for five years and last year seemingly everything broke:


Looks a lot like TurboTax, right? Go ahead, save the money; you’ll feel almost right at home.



As with TurboTax, this Rental screen maps straight to the tax form field – ah, this is so simple :)

Author: "dotnetworkaholic" Tags: "Other"
Date: Saturday, 26 Feb 2011 19:46

This was so easy I felt obliged to blog it. Under our house rule of ‘not used in a year, so it has to go’, our old P4 laptop is being donated. It held old tax records, Microsoft Money files etc. which had to be deleted first.

One free and simple technique is Boot and Nuke. I downloaded the small ISO and used this free ISO burning tool to burn a CD.

The laptop then booted from the CD into a Linux program which, once running, looks similar to the screenshot below. I chose the default and it took about two hours to wipe a 70GB 7400rpm hard disk with the DoD Short (three pass) technique:


Finally another 30 minutes using Averatec’s media recovery discs and it is now ready for someone else to enjoy.
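As an aside, if the machine already boots a live Linux environment, GNU coreutils’ shred does a comparable multi-pass overwrite. The sketch below targets an ordinary file with a placeholder name; pointing it at a whole block device (e.g. /dev/sdX) is irreversible, so triple-check the device name first:

```shell
# Three random-data passes (similar in spirit to the DoD Short wipe),
# then a final pass of zeros. old-tax-records.bin is a placeholder name.
shred -v -n 3 -z old-tax-records.bin
```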

Author: "dotnetworkaholic" Tags: "Technology"
Date: Tuesday, 17 Mar 2009 17:11

>> Click here to download the slides/ demos <<

Thanks to all who came and apologies to those who came and found standing room only – I intend to record these demos and host them soon, watch this space.

The demos are fully RC2 compatible now; that day it was a close call with a non-working demo.

Author: "dotnetworkaholic" Tags: "ASP.Net MVC, Atlanta, Presentations"
Date: Tuesday, 27 Jan 2009 18:45

Update: The Release Candidate has been released. The demo below works fine with it too.

>> Click here to download the slides and second demo. <<

Wow! We had a fantastic turn-out last night; interest in MVC must be high! Not having presented for almost a year, I tried not to publicize the talk; but still, instead of the expected twenty to thirty people there must have been close to a hundred and we had to expand the room! Couple this with the Microsoft building now being almost impossible to find, with that road closure flummoxing us coming from the south side – who knows how many found Old Roswell gone and went back home at that point. For those that missed last night, this talk will be repeated at all local code camps that happen in the next few months; the Atlanta one is in March and 2009 details will soon appear at http://www.atlantacodecamp.com/.

Obviously that presentation covered only some of the very basics and there is a lot more to working with MVC. Watch this space for more MVC tips and if people request it, I’ll put together a deeper presentation for Code Camps etc.

Also, I was not kidding about the free presentation/training at your company. Totally free, and no creepy sales people will call afterwards either – it will keep my hand in while taking a few months out from real work, and with luck one of them might lead to my next gig (which will use MVC). So if you have three or more devs, drop me a line via the ‘Email Me’ page on this blog – don’t worry if you are a small or large shop, I just want to spread the word and try to ensure there is a selection of MVC contracts out there when I get bored of goofing off :)

Author: "dotnetworkaholic" Tags: "ASP.Net MVC, Presentations"
Date: Sunday, 07 Dec 2008 23:35

This is a plea to anyone who blogs, writes or talks about ASP.Net MVC. This stuff is pretty simple, let’s not scare people away.

In the last four weeks of learning MVC I have:

All are guilty of requiring niche knowledge in some or all of:

  • Domain-Driven Design
  • The Repository Pattern
  • JSON, REST etc.
  • Automated Developer Testing
  • Mocking Frameworks
  • Active Record, NHibernate, LINQ to SQL – insert any other non-mainstream data access technique here
  • Differences between MVC, MVP, Front Controller etc.
  • Lambda expressions

Apparently every educator is trying to showcase their knowledge in a manner that is inaccessible to 95% of developers. The presenters I saw in NYC are great guys, but almost all the faces in the audience soon went blank as decks became PhD thesis material. We soon dwindled down to ~50% attendance. I chatted with several ‘regular guys’ over lunch and they were pretty annoyed – what they need to see are simple samples.

Missing Samples: Hopefully the Manning book will improve before publication – the first chapter got me going with the framework, but the rest was almost useless to me. People who care about DDD, MVC details etc. already know about them. What we need are samples of grid controls, binding data to controls, how to get data back from a postback etc. At work we soon had screens working with jQuery and ext; perhaps because we do not worry about DDD, strict patterns etc.?

Of course there is the argument that MVC is not targeted at all developers and only super-intelligent ones will understand it. Poppycock: RoR has been a huge success due to its simplicity – how many RoR developers know they are using Active Record? Not many, I would gamble; most RoR people just want to build something quickly and don’t have a CS background. MVC is not that hard; let’s present it clearly and simply, guys.
Oh yes, and I do think the ASP.Net Framework is heading in the right direction. Then again, I love NUnit and wrote my own MVC framework for WebForms back in 2003, as I did not know how else to test my ASP.Net screens.

Author: "Paul Lockwood" Tags: "ASP.Net MVC"