

Date: Friday, 07 Mar 2014 23:15

With the release of Visual Studio 2013, one of the very cool new things that might easily go unnoticed is the new templates that have been provided for Windows Store DirectX applications. Templates are an incredibly important thing to get right. For new developers, or for veteran developers looking to investigate new technology, they provide a starting point for exploration, a best-practices guide, and expansion points from which they can be developed into different applications. Since Visual Studio templates are the base for all sorts of customer applications, real care has to be put into how they are constructed. They need to demonstrate important new concepts, provide a broad base that can be expanded into different application types, be extremely well organized, and on top of that they need to be concise enough for easy comprehension.

To get to know these new templates better I’m going to take a post to tour around the new templates to introduce the overall architecture, point out a few interesting areas, and note the points where we expect users to replace or expand on the existing template code.

Getting Started

These new DirectX templates are included with the Professional, Premium and Ultimate editions of Visual Studio 2013 as well as the Visual Studio Express for Windows edition. If you want to start for free the Express Edition for Windows on Windows 8.1 is the place to start.

The new templates can be found after selecting New Project, under the Visual C++ node in the Windows Store section. In that section there are two DirectX templates, one labeled “DirectX App” and one labeled “DirectX App (XAML).” The DirectX App template interfaces directly with CoreWindow and is the best place to start if your application’s DirectX content will always be full screen and you don’t need things like the Windows Store app bar or any other controls aside from the DirectX content. The DirectX App (XAML) template is a bit more complex: it hosts the DirectX content in a SwapChainPanel, which allows the DirectX content to be placed among XAML controls as part of a bigger UI. Even if you intend to have your DirectX content always be full screen, the XAML template lets you use controls like the AppBar for application settings. In general, the XAML template is similar to the basic CoreWindow template with some interesting added complexity, so I’ll be using it as the base for this blog post.

Before looking at the code it’s best to just go ahead and compile and run the application so that you can see what the resulting output is going to be. What you should get is a 3D rotating cube with a framerate counter in the lower right and a small “hello” message in the upper right. If you use your mouse you can click and drag back and forth over the cube to rotate it around.


As a side note, the counters in the upper left with the white text are XAML specific framerate counters that help to show the current speed that your XAML content is rendering at. They are not specifically related to this template and you can turn them off by looking for the line “DebugSettings->EnableFrameRateCounter = true” and setting it to false. Just make sure that you don’t get them confused with the framerate counter in the lower right, which is a part of the code of this template and is showing you the current rendering speed of your DirectX content.

Template Tour

Before digging in too deep into any one specific area it’s best to take a quick high level look at the architecture of a freshly created DirectX (XAML) project, named MyApp for this example. Please forgive the informality of the diagram below, as it’s intended as just a quick overview and not any type of rigorous UML diagram.

Fig 1 – Architecture Overview


There are a few more small parts to this template, but the picture above captures the six main pieces of concern. Part of the design of the templates is to create the correct break points for separation between the various application components. In particular, care has been taken to separate the update-and-render application engine in MyAppMain from the Windows Store-specific input / output code in the DirectXPage. The renderers are also created to be independent of each other, and low-level rendering code is kept out of the MyAppMain engine so that renderers can easily be replaced or upgraded without touching the main engine code.

Part by Part

Keeping the above diagram in mind, the template parts can be dug into in more detail: both what they provide and how they relate to each other.

App (XAML)

The App class does little but provide an entry point for application launching as well as handling events for when Windows suspends or resumes the application. When the application is launched, the App class creates an instance of DirectXPage and assigns it to the content of the current window. From then on DirectXPage handles most of the interactions (aside from the aforementioned suspend and resume events) with the user and with Windows.

DirectXPage (XAML)

Before getting into the DirectXPage code, it’s best to first familiarize yourself with the concept of swap chains if you have not already. There is an excellent intro to them here; the basic summary is that a swap chain comprises both the front buffer of pixels currently being displayed to the user and one (or more) back buffers. To keep the presentation smooth, DirectX draws to a back buffer and then instantly swaps it with the front buffer when it is done drawing.

The two main controls of interest in the XAML file for the DirectXPage class are the SwapChainPanel and the AppBar. If you have programmed for the Windows Store before, the AppBar will be familiar: it’s a quick and easy place to put extra control and configuration buttons that you want hidden out of the way most of the time. The SwapChainPanel is a new control for Windows 8.1. Previously, in Windows 8, the control to use for DirectX / XAML interop was the SwapChainBackgroundPanel. This control allowed for interop, but had a multitude of restrictions on how you could use it with XAML. For some examples of these limitations: you had to place it as the root XAML element, there could only be one swap chain per window, and projection and transformation APIs had no effect on its rendering. While SwapChainBackgroundPanel is still available, it’s basically a Windows 8 legacy control at this point. The SwapChainPanel is far more flexible: it can be composed anywhere into the XAML layout, it can be resized and transformed, and you can have multiple SwapChainPanels on the same page.

The SwapChainPanel provides a XAML target for DirectX swap chains, so we need to tie that control together with the DirectX device that will actually draw our 3D content into it. The DeviceResources class, which I’ll detail later, sets up and manages the DirectX device, so in the constructor of the DirectXPage the DeviceResources class is created and the SwapChainPanel is set as its output target. As the final step of construction, DirectXPage creates our MyAppMain (consider this our XAML-independent game / app engine class) and passes in the DeviceResources instance that we created.

// At this point we have access to the device.
// We can create the device-dependent resources.
m_deviceResources = std::make_shared<DX::DeviceResources>();
m_deviceResources->SetSwapChainPanel(swapChainPanel);

m_main = std::unique_ptr<MyAppMain>(new MyAppMain(m_deviceResources));

After initialization, the DirectXPage stays around watching for two main things: XAML composition changes and independent input. XAML composition changes are things like the DPI changing, the SwapChainPanel size changing, and orientation changes. When these events come in, the DirectXPage updates the DeviceResources so that the DirectX device and device context can adjust to the new output parameters, and it also notifies the MyAppMain class so that the engine and any associated renderers can update or load new size-dependent resources.

Independent input is a new concept that goes along with the SwapChainPanel and SwapChainBackgroundPanel controls. When you use independent input, events come in and are processed on a background thread. This decouples the engine from having input arrive at the framerate that the XAML controls are running at. That rate can be different from your DX content in the SwapChainPanel, so your game is going to feel slow and laggy if your input is updating at 15fps along with XAML while your game is actually rendering at 60fps. In the code section below you can see the independent input source getting set up to run on a background thread and hooking up a few pointer events, which allow you to drag the cube back and forth to rotate it. For another solid example, the Windows SDK provides an inking sample that uses independent input to maintain smooth performance while capturing ink stroke gestures on the screen.

// Register our SwapChainPanel to get independent input pointer events
auto workItemHandler = ref new WorkItemHandler([this] (IAsyncAction ^)
{
	// The CoreIndependentInputSource will raise pointer events for the specified device types on whichever thread it's created on.
	m_coreInput = swapChainPanel->CreateCoreIndependentInputSource(
		Windows::UI::Core::CoreInputDeviceTypes::Mouse |
		Windows::UI::Core::CoreInputDeviceTypes::Touch |
		Windows::UI::Core::CoreInputDeviceTypes::Pen
		);

	// Register for pointer events, which will be raised on the background thread.
	m_coreInput->PointerPressed += ref new TypedEventHandler<Object^, PointerEventArgs^>(this, &DirectXPage::OnPointerPressed);
	m_coreInput->PointerMoved += ref new TypedEventHandler<Object^, PointerEventArgs^>(this, &DirectXPage::OnPointerMoved);
	m_coreInput->PointerReleased += ref new TypedEventHandler<Object^, PointerEventArgs^>(this, &DirectXPage::OnPointerReleased);

	// Begin processing input messages as they're delivered.
	m_coreInput->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessUntilQuit);
});

// Run task on a dedicated high priority background thread.
m_inputLoopWorker = ThreadPool::RunAsync(workItemHandler, WorkItemPriority::High, WorkItemOptions::TimeSliced);


MyAppMain is where we encounter the standard game-development update/render loop that is the heart of most games. As timing is of the utmost importance in real-time graphics applications, our DirectX templates contain a StepTimer class (covered in the next section) which is used to control rendering loop timing. As with our independent input events, the rendering loop is run on a background thread. Keeping it on a background thread prevents the possibly expensive update and rendering functions from dragging down UI responsiveness if they start to fall behind.

// Create a task that will be run on a background thread.
auto workItemHandler = ref new WorkItemHandler([this](IAsyncAction ^ action)
{
	// Calculate the updated frame and render once per vertical blanking interval.
	while (action->Status == AsyncStatus::Started)
	{
		critical_section::scoped_lock lock(m_criticalSection);
		Update();
		if (Render())
		{
			m_deviceResources->Present();
		}
	}
});

// Run task on a dedicated high priority background thread.
m_renderLoopWorker = ThreadPool::RunAsync(workItemHandler, WorkItemPriority::High, WorkItemOptions::TimeSliced);

The work of the Update function is quite simple. It first processes any per-frame input data and then allows each renderer to perform its per-frame updates. The input-processing step here is needed because the independent input events come in untethered from the visual framerate of the DirectX page. These events could be arriving much faster than the framerate, so instead of passing them directly to the renderers and forcing them to update much faster than needed, we store the state results of the input events in the engine, then pass that current engine state to the renderers to perform their per-frame updates.

// Updates the application state once per frame.
void MyAppMain::Update()
{
	// Update scene objects.
	m_timer.Tick([&]()
	{
		// TODO: Replace this with your app's content update functions.
		m_sceneRenderer->Update(m_timer);
		m_fpsTextRenderer->Update(m_timer);
	});
}

The Render function acts similarly; in this case it first gets the device context from the DeviceResources class, clears and sets the viewport, and then asks the renderers to perform their render functions.

// Renders the current frame according to the current application state.
// Returns true if the frame was rendered and is ready to be displayed.
bool MyAppMain::Render()
{
	// Don't try to render anything before the first Update.
	if (m_timer.GetFrameCount() == 0)
	{
		return false;
	}

	auto context = m_deviceResources->GetD3DDeviceContext();

	// Reset the viewport to target the whole screen.
	auto viewport = m_deviceResources->GetScreenViewport();
	context->RSSetViewports(1, &viewport);

	// Reset render targets to the screen.
	ID3D11RenderTargetView *const targets[1] = { m_deviceResources->GetBackBufferRenderTargetView() };
	context->OMSetRenderTargets(1, targets, m_deviceResources->GetDepthStencilView());

	// Clear the back buffer and depth stencil view.
	context->ClearRenderTargetView(m_deviceResources->GetBackBufferRenderTargetView(), DirectX::Colors::CornflowerBlue);
	context->ClearDepthStencilView(m_deviceResources->GetDepthStencilView(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

	// Render the scene objects.
	// TODO: Replace this with your app's content rendering functions.
	m_sceneRenderer->Render();
	m_fpsTextRenderer->Render();

	return true;
}

The StepTimer was not explicitly called out in the architecture diagram as it’s a smaller helper class, but as timing is the heart of any game loop it’s a very important component to understand. The timer provided with the templates was designed to cover a broad set of features commonly desired in a timer while still being concise and easy to understand. Before covering more of the StepTimer class, look back up the page a little to the MyAppMain::Update function, where you can see the m_timer.Tick function being called (this is the main point of usage of the StepTimer class).

The StepTimer is created during initialization of the MyAppMain class. By default it is created in variable timestep mode. In this mode, the Tick function calculates the number of ticks elapsed since the last call to Tick and then, without any checks, calls directly into the update function and exits back out of Tick. Given the usage of the Tick function in MyAppMain::Update, we can see that in this mode the logic update in Tick is tied one-to-one with the Render() function of MyAppMain, which is called after it. So in variable timestep mode the update and render functions are always called one after the other, with one update before each render, and both are called as fast as possible, with the framerate limited only by the speed of the update logic and the speed of rendering.

At the initializing point of the StepTimer there is a bit of commented out code that illustrates the use of the fixed step mode of the timer.

// TODO: Change the timer settings if you want something other than the default variable timestep mode.
// e.g. for 60 FPS fixed timestep update logic, call:
/*
m_timer.SetFixedTimeStep(true);
m_timer.SetTargetElapsedSeconds(1.0 / 60);
*/

This fixed-step mode of the timer changes how the Tick function handles updates. If the above code is uncommented, the Tick function guarantees that the update action is called a specific number of times (and only that many times) per time period.

if (m_isFixedTimeStep)
{
	<extra code snipped>

	m_leftOverTicks += timeDelta;

	while (m_leftOverTicks >= m_targetElapsedTicks)
	{
		m_elapsedTicks = m_targetElapsedTicks;
		m_totalTicks += m_targetElapsedTicks;
		m_leftOverTicks -= m_targetElapsedTicks;

		update();
	}
}


In the above code, if the time elapsed since the last update has not yet reached the target amount, we add the delta to the leftover ticks and exit the function without calling the update function. If the accumulated time is past the target amount, we call the update function as many times as needed based on how far we are over the target interval. As such, with a fixed timestep we may call Render before running the update function, or we might call the update function several times consecutively to catch up (if we are running slower than the target framerate) before calling the Render function once. It’s common for games to choose either fixed or variable timestep based on other considerations in the engine, such as how animations are handled or whether the game is highly timing-sensitive, so both modes have been provided with the StepTimer.


Before examining the DeviceResources class it’s best to be familiar with the DirectX 11 concepts of devices and device contexts (more on the creation of both here). The short answer, if you don’t want to read up on those links, is that a DirectX device represents the resources and capabilities tied to a display adapter, while the device context controls the pipeline of how that device is used, including things like setting data into buffers and loading shaders. Having at least a basic understanding of this is important, as the DeviceResources class does much of the template’s work in managing and altering the device and device context.

As seen by the initial diagram, the DeviceResources class is used pervasively throughout the template. The DirectXPage, MyAppMain, and individual renderers all keep references to this class. What it contains is much of the heavy lifting required to create DirectX content to correctly output into the SwapChainPanel as well as providing all the DirectX setup needed for the renderers to do their work. Our hope with this class is that it will provide template users with a very good starting place for most types of games and applications and that by default they won’t have to edit it much for their specific applications. Still it’s always best to have a solid idea of what is going on under the covers in this class.

When created by the DirectXPage, the DeviceResources class creates all the device-independent resources as well as the initial device resources. The device-independent resources are resources that will be held by DeviceResources for the life of the program and will not need to be recreated. On the other hand, device resources are resources tied to the DirectX device, so if the device is lost or needs to be recreated we have to recreate these resources as well. Finally, after construction of the DeviceResources class, the DirectXPage passes in the target SwapChainPanel. At this point DeviceResources grabs window-specific properties (orientation, DPI, logical size) and creates the window size dependent resources. To keep these three classes of resources straight, here is a table showing some examples of each type.

Device Independent Resources: the Direct2D factory, the DirectWrite factory, and the WIC imaging factory

Device Resources: the D3D device and device context, and the D2D device and device context

Window Size Dependent Resources: the swap chain, the render target view, the depth stencil view, and the screen viewport
While all of the various resources are worth knowing more about, the important part here is the three categories: device independent, device, and window size dependent resources. The main function of the DeviceResources class is to keep track of all these resources and to recreate them when needed. For example, when the orientation of your device changes, the DirectXPage class gets an orientation-changed event and passes it to the following function in DeviceResources.

// This method is called in the event handler for the OrientationChanged event.
void DX::DeviceResources::SetCurrentOrientation(DisplayOrientations currentOrientation)
{
	if (m_currentOrientation != currentOrientation)
	{
		m_currentOrientation = currentOrientation;
		CreateWindowSizeDependentResources();
	}
}

 The orientation state is updated and the window size dependent resources are recreated.


The key point of the DirectX template renderers is their independence from each other and from the high-level XAML and app code. The renderers’ main interactions are with the Update and Render functions called from the MyAppMain class. On update, the renderers change any state needed for the current frame step. On render, they grab the D3D device context from the DeviceResources and draw all the text or primitives that they are responsible for. Because the DeviceResources class has taken care of all the swap chain sizing, orientation, and DPI settings, the renderers can be much smaller and cleaner, focused just on drawing their specific scene.

Just like DeviceResources, renderers can have device and window size dependent resources contained in them. In the Sample3DSceneRenderer::CreateDeviceDependentResources function the pixel and vertex shaders are loaded in and the mesh of the cube that we are going to draw is created. This function also provides a nice example of C++ Windows Store task continuation if you are interested in that.

void Sample3DSceneRenderer::CreateDeviceDependentResources()
{
	// Load shaders asynchronously.
	auto loadVSTask = DX::ReadDataAsync(L"SampleVertexShader.cso");
	auto loadPSTask = DX::ReadDataAsync(L"SamplePixelShader.cso");

	// After the vertex shader file is loaded, create the shader and input layout.
	auto createVSTask = loadVSTask.then([this](const std::vector<byte>& fileData) {

		static const D3D11_INPUT_ELEMENT_DESC vertexDesc [] =
		{
			{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
			{ "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		};

<more loading code snipped here>

As this post is mainly concerned with the structure and use of the templates, I’m not going to cover all of the drawing code included in these template files, as that is more of a general DirectX programming topic. But if you want to poke around in the Sample3DSceneRenderer you can see the data being loaded in CreateDeviceDependentResources, the projection and view matrices being set up in CreateWindowSizeDependentResources, the model matrix being set via the Update function, and finally the vertex and index buffers being set and the primitives being drawn in the Render function. All of this makes perfect sense given how these templates are constructed: the shader files and mesh data never need to be reloaded, the view perspective needs to be updated when the window size or orientation changes, the model matrix rotates the cube and thus is needed when the engine updates, and the Render data needs to update once per frame drawn.

Where to modify these templates?

After getting a grip on the overall layout of the templates and the important classes within, the next step is to expand them out for your particular game or application. If you search the solution for “todo:” you’ll get seven results in the MyAppMain class where template consumers are expected to start plugging in their initialization, state, input handling, and connections to custom renderers. Then, by replacing the template renderers with your own custom renderers, you can build out your own application from the very solid base already provided by the template update / render loop and the DeviceResources class’s handling of the XAML / DirectX interop connection.


Author: "IanHuff - MSFT" Tags: "visual studio, debugger, VS, template, g..."
Date: Thursday, 05 Apr 2012 22:10

IntelliTrace Web Requests

Moving forward, the Visual Studio team is committed to bringing new updates to Visual Studio on a much faster cadence than the usual release and service-pack cycle. As part of the run-up to this commitment, we’ve used the Visual Studio 11 Beta period to add some new features to IntelliTrace that were not ready for the Beta release date. Downloading an update to Visual Studio 11 Beta, which should be available via an update from the VS UI this week, adds powerful new IntelliTrace functionality for diagnosing issues in production IIS web applications. This new functionality captures the start and end of each web request hitting the server, collects and records information from each request, and shows which IntelliTrace events are correlated with the execution of each web request.

Collecting web request information on a server

Visual Studio 11 Beta already includes the means to collect IntelliTrace web request information on a production server. However, there is no way to actually view the collected web request information with a non-updated VS11 Beta: the collected files can be opened, but no web request UI controls will be available to the user. The Beta update adds the new web requests UI and opens up the full potential of these IIS server-collected IntelliTrace files.

This article is focused on how to use the new web request UI to analyze data, so the details of collecting the IntelliTrace logs are best left to our in-depth MSDN article on the topic. Note that this collection work makes use of our new standalone collection bits and PowerShell scripts, so that files can be quickly and easily collected through PowerShell while just having to copy over a few files to the server (no installation required).

Analyzing web request information

Opening an IntelliTrace file collected with web request information in Visual Studio 11 Ultimate after installing the Beta update will add a new web requests section to the IntelliTrace summary page. Expanding this web requests section will show a table like the one below. The example file below was collected on a Team Foundation Server instance that we use locally for source control.

This table is a listing of all the web requests that IntelliTrace recorded while collecting data on the server. For each request to the server we capture some basic information: Method, Target URL, Time Taken, Status, Session Id, Client IP, User Agent, Start Time and End Time. Most of these categories are self-explanatory, with the exception of Session Id. The Session Id value is not related to ASP.NET’s concept of SessionID; it is simply an increasing integer that IntelliTrace keeps internally and increments whenever a new session is started. So by looking at just the web requests that share a Session Id value in this view you will see all web requests for a specific user session, but you cannot match this value against the SessionID value from ASP.NET session state. By default the web requests are sorted in the order in which they hit the server.

The search grid at the top of this control provides both plain text search, which searches all but the time columns, and some basic column filtering syntax. This column filtering is done by specifying the column name with no spaces, a colon, and then a filter value. You can do one of these filters per filterable column; the columns which are filterable are listed in the tooltip of the searchbox.

As an example, the following query will pull all of the web requests that returned a 304 and that contain the text “App_Themes.”

Status:304 App_Themes

Or a filter such as “SessionId:2” would show all of the web requests that hit the server for the second recorded user session.

Closer analysis on a web request

From the summary page listing you can dig deeper into a web request by double-clicking it in the list or by selecting it and clicking the “Request Details” button at the bottom of the list. A new window will be opened as a preview tab. If you are not familiar with preview tabs yet (they are new in VS11), subsequent documents will continue to open in the same preview tab unless you promote the tab to the main document well. As you open details on more web requests, each will close the previously opened details page unless that page was promoted out of the preview tab. Currently, if you have a very large file full of web requests, opening this details page can take a little time; this performance is being worked on, but expect a bit of a wait for now.

The top of the request details page shows the same basic request information that was surfaced on the main summary page. Below that is a list of IntelliTrace events similar to the one you would see while live debugging with IntelliTrace or actively debugging a recorded IntelliTrace log. This list shows all of the IntelliTrace events that were recorded while executing this specific web request on the server. They are listed in the order in which they occurred, with the first event at the top of the list.

In the example events list above you can see a bunch of ADO.NET events pulling data from a database as well as a couple of possibly problematic exceptions being thrown (see below).


By double-clicking any of these events, the debugger will start and begin debugging the IntelliTrace log at that specific event. If you have collected your log with method call information, using collection_plan.ASP.NET.trace.xml or by customizing your collection plan, you can then step around in code from that location and examine method parameters and return values. One important caveat: viewing source when debugging a recorded log requires that the symbol (PDB) files for the version of your application that was running on the IIS server be available in the Visual Studio symbol path. Without these symbols, resolving to source locations is impossible and only the “symbols not found” window will be shown when debugging.

How it all works

After this quick introduction to the new feature, I’m next going to dig a bit more into how the internals of this web request feature work on the IntelliTrace side, so keep an eye out for another blog entry soon on that topic.

Ian Huff

Author: "IanHuff - MSFT" Tags: "visual studio, beta, microsoft, intellit..."
Date: Thursday, 09 Dec 2010 16:31

Not much new stuff to write about, as our next-version plans are still being kept pretty close to our chest. But I did want to mention that the Beta for Visual Studio 2010 SP1 is out now and includes support for both 64-bit IntelliTrace and IntelliTrace for SharePoint applications. Both are big items that I know we’ve heard about from customers plenty of times.


Author: "ianhu" Tags: "beta, 2010, intellitrace, 64-bit, SP1, s..."
Date: Monday, 02 Aug 2010 16:34

I got a chance recently to write an article for Visual Studio Magazine providing an introduction to debugging with IntelliTrace. It just came out in the August print issue and the e-copy was just put up online here over the weekend. Check it out if you are looking for a basic introduction to IntelliTrace functionality and to learn some of the ways that IntelliTrace can help to speed up your debugging.

Overall the article writing process was fun, but not something that I’d want to do too often. All the editing and whatnot was just a little too time consuming for my tastes. I like just getting stuff done and getting it out there.


Author: "ianhu" Tags: "visual studio, developer, intellitrace, ..."
Date: Thursday, 22 Jul 2010 21:25

Sorry for the lack of posts for a while, we’ve had a few moderate changes in our development teams here in Visual Studio after shipping Visual Studio 2010. As it all shakes out, I’ve been moved from an IntelliTrace specific team to a Diagnostics UX team tasked with the user experience for the debugger, IntelliTrace, the Visual Studio Profiler and Static Analysis tools. We’ve also merged the diagnostics section with the Team Arch tools into a new Visual Studio Ultimate product unit.

For this blog I plan on keeping the focus on IntelliTrace (and that is what I mainly expect to be working on) but expect to see more info on the Profiler, Debugger, Static Analysis and Architecture tools scattered in here as well.

Currently I’m just finishing up a project to improve our internal regression prevention system. Getting some experience with a Silverlight + RIA Services multitier app was fun, but the confusion that comes with trying to bring together a disparate team to accomplish a task and then disband was a little crazy at times. With that project wrapping up I’ll be back on the main product line and able to add some more IntelliTrace blog entries soon.


Author: "ianhu" Tags: "Microsoft General, visual studio"
Date: Tuesday, 25 May 2010 16:39

IntelliTrace Links

Currently our team is deep in the throes of MQ (Milestone Quality, essentially a chance to catch up on improving our own tooling, automation and build scenarios after a big release) so I’ve been a little short on time to post new blog articles. However I did have a few quick links to share out. First up is Chris Koenig’s how-to video for using IntelliTrace in Visual Studio 2010. This video provides a solid introduction to the basics of IntelliTrace debugging.


I also wanted to link in some of the excellent walkthroughs created by our MSDN team.

Debugging a card game in IntelliTrace Part 1

Debugging a card game in IntelliTrace Part 2

Debugging a Web Site with IntelliTrace Part 1

Debugging a Web Site with IntelliTrace Part 2

~Ian Huff

Author: "ianhu" Tags: "visual studio, intellitrace, ultimate, w..."
Date: Friday, 07 May 2010 17:02

A rare non-IntelliTrace focused blog entry for me. But I really don’t see much talk from customers about pinnable datatips and since these little buggers are one of my favorite new additions to the Visual Studio 2010 debugger I figured they could use a little love.

In Visual Studio 2010 we’ve done some major updating to the user interface, bringing most parts of the shell over into the WPF world from earlier technologies. This update was pretty painful for us internal teams that integrate heavily with the shell, but the end result was worth it as we’re able to do some really cool little UI things much easier than it would have been trying to do them with the old shell. Pinnable datatips are one of those nice little new additions that the shiny new shell affords us.

To check out pinnable datatips all you have to do is debug into a project, stop at a break point and mouse over a variable to bring up the datatip. When the datatip pops up you’ll notice a new little pin icon that appears at the end of each item’s row.


In the example above I want to examine the AvgValue property on each result as I move through the iteration loop. So after clicking the pin button next to the AvgValue property I’ll be left with a little control showing just that property as shown below.


Note that the three little controls to the right of the pinnable datatip will only appear when I mouse over the data tip. If I’m not over it I’ll just see the tip and the value. This tip is now pinned into place over that specific line of source code and will stay over that line so you’ll only see it if that specific line is on screen. If I run to the next iteration of the loop and the value changes the text will be in red to draw my attention to the change (just like in the watch window).


The top button of the right side hover buttons will delete the pinnable datatip, just as one would hopefully expect from an icon with a big X on it. The second button will unpin that datatip from the specific source line and allow you to drag it around to wherever you want on the screen. When the tip is unpinned it will stay in that location on screen as opposed to scrolling off with the source line (see pictures below). This mode can be seen more as a post-it note stuck to a screen (hence the yellowish background color in this mode), as opposed to the pinned version, which is more of an annotation on a specific source line. If you want to repin the datatip just click the pin button again and the datatip will latch on to the closest source line (note: not the original source line that it was pinned to). This lets you group and position all your pinnable datatips either relative to the screen or relative to any handy source line. Note that in the picture below you can see the outline box of where I'm dragging the tip to; with the mouse cursor hidden by PrintScreen it's a little hard to tell that I'm dragging the tip to a new location.


The final button expands the post-it note concept further by allowing you to type in a little comment to associate with that specific pinned data tip. This note can be anything that you want to help with debugging. I know that in particularly hairy debugging sessions in previous releases this feature would have saved me lots of time just by helping to organize exactly why I was watching a specific variable at some point. In general I would take debugging notes like that on a sheet of paper, but now that I have the functionality to keep them in the debugger right where I’m working I find that I’m increasingly using notes on pinnable datatips as opposed to pencil and paper.


It’s very much one of my favorite new debugger features in VS 2010 and something that I think all VS developers could get a lot of use out of.

As a final note, I realized that I was mostly working with properties above. Pinnable datatips can also pin types. These pinned types can be expanded in the same way that you would expand and examine a normal datatip. In the example below I’m watching a few properties and I’ve got the result object pinned so that I can examine some properties off of it.


~Ian Huff


Author: "ianhu" Tags: "visual studio, 2010, debugger, datatip, ..."
Date: Thursday, 22 Apr 2010 15:53

From lots (and I do mean lots) of customer tweets and comments we know that folks are unhappy with IntelliTrace not working with 64-bit applications. We knew that this was going to be an issue for many customers, but due to time constraints we just weren’t able to get all the kinks in 64-bit F5 ironed out in time to launch with a high quality bar.

While we are working on getting this scenario fixed for a future date I would like to mention that there actually is a way to collect IntelliTrace data on 64-bit applications with just what ships with VS 2010 Ultimate. Some time ago I posted a blog article in which I talked about how to collect IntelliTrace data with Microsoft Test Manager, the new standalone test data collection application that ships with VS 2010 Ultimate. Because the launch patterns for our MTM IntelliTrace data collector and for F5 launch from Visual Studio are different we were actually able to get 64-bit working in the MTM scenario. Just follow the steps in my listed blog article and you will be able to capture IntelliTrace data for 64-bit applications; no special tweaks or changes to the workflow are needed compared to 32-bit applications in MTM.

It’s not the best workaround, especially if you don’t regularly use MTM already, but if you want to capture IntelliTrace data on 64-bit applications it’s currently the only option available.

Author: "ianhu" Tags: "visual studio, 2010, microsoft, intellit..."
Date: Tuesday, 16 Mar 2010 20:21

When introducing a new feature like IntelliTrace you’re bound to get a lot of incorrect information floating around about exactly what the feature is and how it works. In particular with IntelliTrace I see lots of confusion about exactly what data is collected by IntelliTrace while running. While I’ve mentioned what data IntelliTrace collects on and off in various blog posts I figured that it might be smart to get all the info codified into one blog post.

What IntelliTrace doesn’t collect

When customers first hear about IntelliTrace what is usually conjured to mind is the ability to step backward through their code, checking to see what happened previously with all the full features and information of normal live debugging. It would be wonderful if we could fully deliver on that vision with IntelliTrace, but the need to keep both program execution overhead and log file size down while still providing useful information prevents that. If your vision of IntelliTrace was collecting the whole world of data and being able to step back through it then the section below might be a bit of a disappointment. But we feel that the choices made give the best balance between speed, log file size, and collection of valuable data for just about all users.

What IntelliTrace does collect

So now that we’ve set the expectation that IntelliTrace is not going to be collecting all the data that you have access to in the live debugger, what exactly is it collecting? Well, the answer depends on whether we are collecting data at an IntelliTrace event, at a debugger stopping event, or at a method entry or exit in calls mode. While the details of what is collected are different in each case, we do collect some data in common regardless of the mode. In particular we always collect system information when first starting collection, module load and unload events, and thread start and end events. With the module and thread events we are able to keep the Modules and Threads debugger windows correctly updated when you are moving back into your program’s execution.

Another place that we always collect data, regardless of what mode we are running in, is at debugger stopping points such as breakpoints. At these points we will collect all basic data types (and all basic data types one level off of objects) that are examined or evaluated in the debugger. This is very handy when you examine a value, take a step forward, and see that the value has changed but didn’t make a note of the previous value. Since you examined it (causing IntelliTrace to collect the data at that stopping point) just take a step back in time to see the variable at its previous value. In the example below I’ve taken a few steps forward in the debugger (notice the step events in the flat list on the right), then I’ve jumped back in time to one of the previous debugger steps. In the locals window you can see the variables that were collected at that point. Those will also show up correctly when hovering over those items for datatips or pinnable datatips.
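The "basic data types one level off of objects" rule can be pictured as a shallow snapshot: an object's primitive fields are recorded, but nested objects are not expanded further. Here is a purely illustrative Python sketch of that idea (not IntelliTrace code; the Gizmo classes are hypothetical):

```python
# Illustrative sketch: snapshot only primitive values, descending a
# single level into an object's attributes.
PRIMITIVES = (int, float, bool, str, bytes, type(None))

def shallow_snapshot(value):
    if isinstance(value, PRIMITIVES):
        return value
    # One level off of an object: keep its primitive attributes only.
    return {name: attr
            for name, attr in vars(value).items()
            if isinstance(attr, PRIMITIVES)}

class Child:
    pass

class Gizmo:
    def __init__(self):
        self.name = "sprocket"
        self.count = 3
        self.child = Child()  # nested object: dropped from the snapshot

print(shallow_snapshot(Gizmo()))  # {'name': 'sprocket', 'count': 3}
```

The nested `child` object is simply omitted, which mirrors why stepping back in time sometimes shows less detail than the live debugger would.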

Data Collection at IntelliTrace events

When you hit an IntelliTrace event during your program’s execution you will collect data that has been specifically configured to be captured there. Inside the collectionplan.xml file IntelliTrace events can specify either the collection of basic local variables via DataQuery elements or provide classes that inherit IProgrammableDataQuery to perform more complex data retrieval. The upshot of this is that at IntelliTrace events we only collect a small amount of data that is custom tuned to be relevant to the specific event being examined. If you move back in time to an IntelliTrace event you will most likely just see the [IntelliTrace data has not been collected] message when mousing over any local variables.

This highly guided data collection is intended to keep the overhead when running in just events mode as low as possible. By default IntelliTrace is always running in this mode for managed applications, so even minor degradations in performance can have a really big effect. It’s important to know that, while officially unsupported, you can create your own custom IntelliTrace events for richer debugging on your own applications. And for these events you can use the same DataQuery / IProgrammableDataQuery system to collect just the data known to be most important to you at all these points.

Below I’ve shown an example of IntelliTrace being set back to an event in which an environment variable was accessed. In this case the event has been configured to collect the name of the variable being accessed, which appears both in the event and in the event item in the autos window.

Data Collection in calls mode

When you have a little performance overhead to spare and want to collect much deeper IntelliTrace data you can jump into the options pages and turn on calls mode. In this mode, in addition to data collection at IntelliTrace events, data is also collected at function entry and exit points. At function entry points we will collect all basic types, and basic types one level off of objects, for all the parameters that are passed into the function. We’ll also apply the same principle to the return values of the function. By capturing parameters and return values you can often treat the actual function as a black box, and at the lowest cost in terms of collection you can tell which function is spitting out bad data that’s leading to a crash or other error.
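The black-box idea of recording only parameters at entry and return values at exit can be sketched in a few lines of Python (purely illustrative, not IntelliTrace code; the tracer and the `scale` function are hypothetical):

```python
import sys

# Record a function's inputs at entry and its output at exit,
# treating the body as a black box.
calls = []

def tracer(frame, event, arg):
    if frame.f_code.co_name == "scale":
        if event == "call":
            calls.append(("enter", dict(frame.f_locals)))  # parameters
        elif event == "return":
            calls.append(("exit", arg))  # return value
    return tracer  # keep tracing so we see the matching return event

def scale(x, factor):
    return x * factor

sys.settrace(tracer)
scale(3, factor=4)
sys.settrace(None)

print(calls)  # [('enter', {'x': 3, 'factor': 4}), ('exit', 12)]
```

With just the enter/exit records you can spot which call produced bad output without ever stepping through the function body, which is the same payoff calls mode gives you at scale.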

Pictured below is IntelliTrace stopped and moved back from a breakpoint to a function enter point. In the autos window you can see that all the primitive data types off of the GizmoManager object that was passed in as a parameter have been collected and are available to view.

~Ian Huff

Author: "ianhu" Tags: "2010, intellitrace, ultimate, intellitra..."
Date: Monday, 08 Mar 2010 18:07
If you've read into IntelliTrace events at all you've probably seen the potential in writing your own custom events to log debugging data at specific trouble points in your own code, as opposed to just using the events that are predefined in the .NET Framework.


Sadly in the Visual Studio 2010 release of IntelliTrace we didn't really have the time to give this custom event authoring scenario the respect that it really deserved. So while the functionality is still there we're not officially supporting this scenario in this release, meaning that you won't ever see official documentation on it some place like MSDN. I was actually planning to document this scenario more on my blog, but one of our more enterprising customers had already figured out much of it and just finished up his own blog post on the topic of creating custom IntelliTrace events. So I figured that I'd just piggyback on Guillaume Rouchon's excellent efforts and link his article in here.


Customize IntelliTrace events (note FR language version also available)


Please do note the warning at the start of the article that this scenario is not officially supported in this release. You can try pinging me on this blog for help but no guarantees with how much we'll be able to help. Also note that if you are making a large investment in this scenario, things might change as we try to clean it up for a later VS release.


Ian Huff

Author: "ianhu" Tags: "visual studio, 2010, intellitrace, ultim..."
Date: Tuesday, 02 Mar 2010 21:06

Currently here at Microsoft (on the diagnostics team in particular ;) ) many of us are using IntelliTrace in our own day to day debugging work. This dogfooding process is a big part of Microsoft culture and something that we really need to keep growing IntelliTrace in the years ahead. As a way of showing external customers how we’ve been using IntelliTrace internally I’ve started up the IntelliTrace Tales series of blog posts, of which this is the first. This series will focus on quick little stories about how we are using IntelliTrace internally here at Microsoft to help with our own debugging and testing efforts.

Spotting swallowed exceptions

Often when debugging you can run into an exception out of the blue, yet that exception doesn’t give you all the information that you need to diagnose the issue. Often this can be resolved by examining a deeper or earlier exception that was thrown and then swallowed up before being surfaced to the user. But it’s not always easy to get back in time, enable the correct exceptions, and examine exactly what caused the issue. The fact that IntelliTrace is always running and collecting debug data when exceptions are thrown can often help alleviate this scenario, as seen below.

In this example one of our developers was working with some XAML code and ended up running into an XAMLParseException as seen below.

Now this initial exception wasn’t informative enough for our developer to take action, so he took a quick look over at the IntelliTrace events list and browsed through the most recent exceptions. There on the list was the actual exception thrown instead of the more general XAMLParseException that ended up bubbling up to the surface.

From that exception he was able to take much better action on the bug, jumping right to where he needed to change something like Converter=VisibilityConverter to Converter={StaticResource VisibilityConverter}. I like how this report from the developer shows that using IntelliTrace can just be an unobtrusive part of your regular debugging. Then when you need more info there are times where it can really step in and save you a bunch of time and hassle.
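To make the fix concrete, here is a hedged sketch of the before and after markup. Only the Converter syntax change comes from the story above; the TextBlock, the IsReady binding, and the resource key are hypothetical stand-ins:

```xml
<!-- Before (hypothetical markup): the converter is given as a raw string,
     which is not a valid converter reference and fails at XAML parse time -->
<TextBlock Text="Hello"
           Visibility="{Binding IsReady, Converter=VisibilityConverter}" />

<!-- After: the converter is looked up as a resource defined elsewhere -->
<TextBlock Text="Hello"
           Visibility="{Binding IsReady, Converter={StaticResource VisibilityConverter}}" />
```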

Author: "ianhu" Tags: "2010, microsoft, intellitrace, tales"
Date: Friday, 26 Feb 2010 22:40

If you’ve looked at previous articles on IntelliTrace you’ll have noticed that IntelliTrace has two main modes that it can run under. In the default mode IntelliTrace will just be collecting debugging data at the predefined IntelliTrace event points. In the “methods and calls” mode IntelliTrace will be collecting debugging data (including method parameters and return values) at all function enters and exits in all instrumented modules. This methods and calls mode provides a wealth of additional information about your program’s recorded execution path, but at a substantially higher cost in terms of time overhead and log file size. Aside from the time overhead, this mode also created some issues with how to navigate back and forward to specific points when debugging back in time. Many of the usual debugger UI metaphors didn’t quite fit with browsing our IntelliTrace data, so we’ve added in a few new controls to help out with this in Visual Studio 2010 Ultimate.

The IntelliTrace Navigation Bar

The first feature that you will probably run into when debugging in methods and calls mode of IntelliTrace is the navigation bar. If you just turn on methods and calls in the options page and run the debugger to a breakpoint you’ll see this control off on the left just to the right of the breakpoint margin.

This navigation control is intended to help with localized stepping when in historical mode as well as the ability to jump back to live debugging mode quickly from any historical context. In the control above there is only one button highlighted as available. This button is the previous call or event button (looks like a VCR rewind pointing up) and if you go ahead and click that your screen should now look like the picture below.

When you stepped back you might have noticed that there was a slight color change in the navigation gutter. This slight darkening is just a little visual clue to let you know that you are back in historical mode and not working with the live debugger. You’ll also notice that we moved back from our breakpoint to the above call to gizmoMgr.StartGame(). It’s important to remember that we only collected debugging data at IntelliTrace events and at method enters and exits. In historical mode the previous call or event and next call or event buttons (the latter looks like the inverse of the step back button that we clicked earlier) won’t move you line by line unless you have an instrumented function call on every line. It’s entirely possible that these could step you over large chunks of code, so be aware of that when using them.

The top button on the control represents historical step out. Just like step out in live mode this will take you back to the callsite of the current function, with context placed after the method exit of that function. Below the step out button is the step back button, which we’ve already covered above. The next button after that is the step in button. As expected, this button mimics the debugger step in functionality and will either move into a function’s body when hit at a valid callsite or will step to the next location with IntelliTrace data inside the current function. Next is the next callsite or event button, which mimics the debugger step over and will move you past each IntelliTrace call or event in the current function without drilling down into any instrumented calls. And finally the return to live button is the bottom button with the hourglass and the yellow arrow. This button only appears when you are in historical mode and when clicked will set your context back to the live mode of the debugger, allowing you to continue program execution.

Another handy feature of this gutter is that the tooltips on each button allow you to see the hotkeys for each stepping command. For next call and step in we’ve borrowed the hotkeys for step over and step in from the debugger so (even though the functionality is slightly different) you can browse forward in a similar fashion as with the live debugger.

Searching for specific methods or lines

Another scenario that you might find yourself in when browsing IntelliTrace data is that you have a specific function of interest and you want to examine your debugger state at that function at some previous point in execution. To help with this scenario we’ve built search functionality into IntelliTrace that operates when you are running in methods and calls mode. This functionality is a little bit hidden so it seemed smart to call it out in a blog post here.

To perform a search just right click on source code on a specific line of interest when running in methods and calls mode of IntelliTrace (or while debugging an iTrace file containing methods and calls information) and select either “Search for this line in IntelliTrace” or “Search for this method in IntelliTrace.”

Note that if you are using search for this line you probably want to be on a line containing a method call, as IntelliTrace only captures data at function calls, enters, and exits. If you search on a line that we didn’t capture any data on, the search will just return zero results; it won’t try to grab other nearby results or anything like that. After triggering the search a search bar will appear at the top of the source window and will gradually fill up with results as the IntelliTrace log is combed through for matches. On the search results bar (pictured below) you will see the context that you are searching for, arrows that allow you to browse to the first result, the previous result, the next result or the last result, and an indicator of your current position and the overall count. In the example below we’ve performed a search for the method runbutton_click. Our context is not currently at any result (the “- of X” part would read something like “23 of 401” if we were at the 23rd instance of 401 total results), one result was found, and that one result is located at a previous point in program execution since just the previous and first buttons are enabled.

If we were to hit the previous result button we’d get what is seen below. In this example we can now see that context has been set back to the method enter of runbutton_click. The search bar indicates that we are on result one of one and since we are already sitting on the only result the forward and back navigation buttons are disabled.


The IntelliTrace toolwindow calls view

So the gutter provides local navigation support while the search functionality provides jumping to a specific location in source. We’ve still got a functionality gap here where users want to navigate around their historical data faster than stepping can support but without knowing the specific location that they want to jump to. To fill that gap we’ve created the calls view in the IntelliTrace toolwindow. To see this view just bring up the IntelliTrace toolwindow when running in methods and calls mode (this window will be in IntelliTrace events mode by default) and click the “Switch to calls view” hyperlink just below the toolbar of that toolwindow. Clicking that will present you with the (slightly confusing, yes we are aware of that :)) view shown below.

It’s a lot of info to take in at first glance, so I’ll try to walk through the various pieces of this control. The top pane of this control, which we refer to as the context pane, shows a reverse stack of your current location in source code. The current location in runButton_click is located at the bottom line of this pane. By tracing up the stack you can see the current code path leading back up to the ThreadStart function. The bottom pane of this control, which we call the content pane, lists all the calls out from this current instance of runButton_click. In the content pane calls that were made out to uninstrumented code are printed in gray while the calls out to instrumented code are in black with a blue step into icon on the left side of their row.

If you want to set code context to the callsite of anything in the content pane just single click on any entry. This action will leave your current pivot in the calls view at the runButton_click function, but your actual context in code will move to the callsite of the specific function selected (see the example below: I’ve selected DrawGameBoard in the content pane and the debugger is now showing debug context and collected data at the time of the call out to DrawGameBoard). You’ll also see a little arrow icon in the content view to indicate what specific callsite you are set to.

If you want to actually drill deeper into an instrumented child method just double click on it in the content pane. Doing so will move that function call up to be the new pivot at the bottom of the context pane and will update the content pane to show all the method calls out from this new pivot function. In the example below I’ve double clicked on the DrawGameBoard function that I had single clicked on before. You can see that the code context has now been set to the method entry of the DrawGameBoard function and the toolwindow has been updated to show the contents of the DrawGameBoard function.

If you want to navigate back up the tree to other further away locations in the call tree you can double click on any item listed in the context (top) pane. Doing so will set that new function as the pivot and update the code context and content pane accordingly. Also note that IntelliTrace events will show up in the content pane in the proper locations. In the picture below I’ve navigated back up to the ButtonBase.OnClick function in the .NET framework. Since it’s in the framework I don’t have source code for this location, but you can see the IntelliTrace click event marked with the lightning bolt over in the content pane.

Navigating around in collected IntelliTrace data can be a tricky proposition. Taking debugger data and opening it up to a whole new dimension of time is something that we’ve worked long and hard on in our UI, but we’ve still got a long ways to go. If you are working with IntelliTrace and run into anything in particular that you think could be added / improved just drop me a line, we’re always open to suggestions.


Ian Huff

Author: "ianhu" Tags: "visual studio, tsbt-dev, 2010, historica..."
Date: Wednesday, 24 Feb 2010 19:27

Visual Studio Magazine just posted an article online providing a little introduction to IntelliTrace from Jeff Levinson. Seemed worth a link if you are looking for another take on IntelliTrace and exciting from my end to see external content starting to get out there for the release of 2010.

 Ian Huff

Author: "ianhu" Tags: "visual studio, intellitrace, magazine"
Date: Wednesday, 10 Feb 2010 21:52

Even if you’ve already spent some time getting familiar with IntelliTrace (our new debugger logging feature in Visual Studio 2010 Ultimate) and playing around with some of the advanced options you might not be aware of the CollectionPlan.xml file and how important it is to controlling IntelliTrace. This is by design, as the collection plan isn’t the most user friendly thing to edit and contains much deeper options than most users would need to access for basic tasks. That being said, a bit of knowledge about the collection plan can help quite a bit in understanding how IntelliTrace works and can give you some expanded options for tuning IntelliTrace for your own use. This file is also especially important to know about if you plan on running IntelliTrace.exe outside of VS; in that case you will always have to pass it a collection plan so it knows how to operate.

What is the collection plan?

The collection plan is a set of options that tells the actual IntelliTrace.exe program (which is executed under the covers by Visual Studio or Microsoft Test Manager) exactly what data should be collected, when that data should be collected, and how to create the resulting log file. When you open up an iTrace file after collection this information has already been packed into the file; that way when we open it up for analysis we know exactly how to decode all the debugging data stored in the file.

When first starting up Visual Studio the options in the Tools->Options menus are populated from the default collection plan that we ship with VS. However, after you start to edit options via Tools->Options the values in VS will diverge from the default values in the included collection plan, so be aware of that when manually looking at or configuring the collection plan. You can see this default CollectionPlan.xml at the following location in your VS install.

C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\TraceDebugger Tools\en

What is defined in the collection plan?

The collection plan contains lots of different options for tuning IntelliTrace collection. Below I’ll try to cover at least the broad sections quickly to help explain what each option controls.

The first section is the StartupInfo section. In here you can set default values for the log file name and directory. You can also cap the log size at a specific maximum. IntelliTrace uses circular buffering, so when that limit is hit the oldest events will start to be evicted from the file to make room for the new ones.
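The circular-buffer behavior is easy to picture: once the configured cap is reached, the oldest entries fall out first. A small illustrative sketch (not IntelliTrace code, and using an event count rather than a byte size for simplicity):

```python
from collections import deque

# A minimal sketch of circular-buffer log eviction: once the size cap
# is reached, the oldest events are dropped to make room for new ones.
MAX_EVENTS = 3  # stands in for the log-size limit in StartupInfo
log = deque(maxlen=MAX_EVENTS)

for event in ["module load", "thread start", "button click", "exception"]:
    log.append(event)  # appending beyond maxlen silently evicts the oldest

print(list(log))  # ['thread start', 'button click', 'exception']
```

The practical consequence for IntelliTrace is the same: in a long-running session the log always holds the most recent window of activity rather than the whole run.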

The next section, CheckpointOptions, is one that is unlikely to be of much use to most users. When we collect and load iTrace files we use a checkpointing system to load data in chunks to avoid memory pressure. In this section you can change the mode and rate of checkpointing for both thread and notify point events.

The next series of sections all control enabling or disabling different types of instrumentation as well as controlling any specific binaries or processes that are to be excluded from instrumentation. In each section changing the “enabled” value to true will turn on that specific type of data collection. Each section also has a list of modules and processes to specifically exclude from collection. By changing isExclusionList from “true” to “false” you will turn this exclusion list into an inclusion list in which only the specified modules will collect data. You can specify modules with a public key token, while for both modules and processes you can specify a substring to match for inclusion or exclusion. In the example below (snipped from the default CollectionPlan.xml) you can see that we don’t want IntelliTrace to collect data from modules with “Microsoft.” in the name or from the processes devenv.exe or mtm.exe (Microsoft Test Manager).

<TraceInstrumentation enabled="false">

    <ModuleList isExclusionList="true">
        <Name>Microsoft.</Name>
    </ModuleList>

    <ProcessList isExclusionList="true">
        <Name>devenv.exe</Name>
        <Name>mtm.exe</Name>
    </ProcessList>

</TraceInstrumentation>

The four different instrumentation categories listed in the collection plan are explained as follows. Trace Instrumentation represents the collection of method enter and exit data. Diagnostic Event Instrumentation represents the collection of data at IntelliTrace events. Web Request Tracking Instrumentation represents the instrumentation that we do to trace IntelliTrace calls across web request boundaries. Detour Instrumentation is actually deprecated and does not affect the current function of IntelliTrace.exe.

After the instrumentation sections there is the TracePointProvider section that makes up the bulk of the collection plan. This section details all of the IntelliTrace events (the “trace point” name is just outdated terminology for the same thing) that are available for collection. At the start of the section is a Categories section that provides the overall categories that IntelliTrace events are grouped into. These are the categories that you will see in the Tools->Options section when selecting IntelliTrace events to enable.
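Assuming it follows the same localization pattern as the event names shown later in this post, a category entry looks roughly like this (the element and attribute names are assumptions on my part, not copied from the shipped file):

```xml
<Categories>
    <Category Id="console" _locID="category.Console">Console</Category>
    <Category Id="winforms" _locID="category.WinForms">Windows Forms</Category>
    <Category Id="wpf" _locID="category.WPF">WPF</Category>
</Categories>
```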

Below the Categories section is a section defining the various modules that will be referenced by the actual diagnostic events. The DiagnosticEventSpecification elements are the heart of the IntelliTrace events; the example below is for the WinForms button clicked gesture.
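Before looking at that event, note that a module entry in the preceding section looks something like the sketch below. The exact element names and the Id value are assumptions on my part, but the Id is what a binding’s ModuleSpecificationId refers back to:

```xml
<ModuleSpecifications>
    <ModuleSpecification Id="sysWinForms">System.Windows.Forms.dll</ModuleSpecification>
</ModuleSpecifications>
```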

<DiagnosticEventSpecification enabled="true">
    <CategoryId>winforms</CategoryId>
    <SettingsName _locID="settingsName.Button.OnClick">Windows Forms: Button Clicked</SettingsName>
    <SettingsDescription _locID="settingsDescription.Button.OnClick">A Button on a Windows Forms application was clicked.</SettingsDescription>
    <Bindings>
        <Binding>
            <ModuleSpecificationId>sysWinForms</ModuleSpecificationId>
            <TypeName>System.Windows.Forms.Button</TypeName>
            <MethodName>OnClick</MethodName>
            <MethodId><!-- full method signature snipped --></MethodId>
            <ProgrammableDataQuery>
                <ModuleName>Microsoft.VisualStudio.DefaultDataQueries.dll</ModuleName>
                <TypeName><!-- query type name snipped --></TypeName>
            </ProgrammableDataQuery>
        </Binding>
    </Bindings>
</DiagnosticEventSpecification>
At the top of the event you can see the enabled field that controls whether this event is part of collection (assuming that overall Diagnostic Event Instrumentation is enabled earlier in the file). By default most of the events are turned off and need to be enabled in the VS IDE or the collection plan file (if using IntelliTrace.exe) to be collected. After the main element you have elements that set a category (from the groups that we defined earlier), a name and a description for this specific event. After that is the Binding section that controls when this event is collected. The ModuleSpecificationId, TypeName, MethodName and MethodId fields tell IntelliTrace the exact function at which we should collect this event. In this specific case we are interested in the System.Windows.Forms.Button.OnClick function, and every time this function is called in an application running under IntelliTrace we will collect an event.

So that defines when the event is collected. But to understand what data we collect at that point we need to look at the ProgrammableDataQuery section that follows. A programmable data query is the code that runs to actually collect local data of interest when an IntelliTrace event fires. Shown in the screenshot below is the debugger set back to a WinForms button click event that was previously collected. The programmable data query controls the data that we currently see in the Autos window. For the button click event we’ve created a PDQ to collect the display string of the button in question, as this seems the best piece of info for identifying the button when going back over an IntelliTrace log.

The binary Microsoft.VisualStudio.DefaultDataQueries.dll defines all of the default queries that ship with Visual Studio. By inheriting from IProgrammableDataQuery it is possible (and customers have already been doing this) to create your own PDQs and use them to create your own IntelliTrace events. Due to time and budget constraints this scenario is not officially supported in this release of IntelliTrace, so you won’t see any official documentation on MSDN for it.

With the collection plan introduced and detailed above I’ll look a little deeper at actually running IntelliTrace.exe from the command line with a custom collection plan in a future blog post.

 Ian Huff

Author: "ianhu" Tags: "visual studio, developer, tsbt-dev, team..."
Date: Wednesday, 03 Feb 2010 21:33

I’m the type of guy that when I fire up a new PC game my first stop before I even start playing is the options menu. I like to tweak things to my liking, and even if I leave everything at default I like to know what options are there. If you have the same personality type, then when you break out your fresh install of Visual Studio 2010 Ultimate and hear about the new debugging feature IntelliTrace you might go straightaway to Tools->Options to see what settings you can tweak for this new feature. If so, this guide will be a good little start to telling you what the various options are and how you can tune IntelliTrace to work best for your debugging practices. As with my other articles it’s best to have a basic knowledge of how IntelliTrace works (note the name change from Historical Debugging to IntelliTrace) before jumping into this.

General Options

IntelliTrace options can be accessed via the Tools->Options menu in their own IntelliTrace tab group separate from the other debugger options. First up in the list is the general options page shown below.

The big checkbox on the top gives you the ability to turn off IntelliTrace collection globally. We’ve engineered it so that at default settings IntelliTrace has a very low overhead and given that, it’s turned on by default for all managed projects. But there can be some funny cases where it could inhibit performance so if you want IntelliTrace turned off totally just uncheck this box. Below that box is the selector to collect debugging information either at just IntelliTrace events or at both IntelliTrace events and function enters and exits. By default we have this set to just collect at events. When you turn on collection at function enters and exits you will get way more information about your program’s execution, but that information comes at a high cost in terms of execution time. This slowdown depends on the nature of your programs so you’ll have to experiment to see if this mode works for your scenarios or not.

Advanced Options


When you are running with IntelliTrace, all the debugging events and information are getting packed away into an iTrace file behind the scenes. Unless you need to pass off debugging data to someone you might not ever even know that these files exist, since we clear them off the system when Visual Studio is closed down. The first option on this page lets you specify the location that these files will be saved to. It also lets you know where to go if you want to save off a debugging session to look back at later. Just make sure to move the file out of this directory while VS is still running if you want to keep it around, as this location will be cleared when you close down VS to prevent your hard drive from being flooded with iTrace files. Also, to help control file sizes, we’ve created a mechanism to limit the size of the iTrace files that are generated (especially with enters and exits turned on they can get large pretty quickly). The combo box below the file location lets you specify a maximum size that a file can grow to. When we hit that maximum we start to discard the oldest events and replace them with the newest ones, so you don’t have to worry about losing any recent debugging information.

The next checkbox below that lets you turn on or off the IntelliTrace navigational control. The navigational control appears next to your current context when you are running in enters and exits mode to give you some quick hotkeys for stepping forward, back, out and in between previous function calls (more about this in a later blog post). It just takes a bit of space on the left side of your margin, but if you’re not using it and it’s in the way just uncheck this to get rid of it.

The next two options pertain to how symbols are looked up for IntelliTrace. The first option, “Enable Team Foundation Server symbol path lookup,” means that when you go to analyze an iTrace file, if you have Team Build configured, then iTrace files with correct build info can automatically try to pull down the matching PDBs for the build the iTrace file was collected on. The second option, “Prompt to enable source server support,” comes into play when you open an iTrace file in which the PDBs support source server lookup. In this scenario, if the debugger is currently configured not to use source servers, you will get a prompt when launching debugging from the iTrace file to select whether you want to use source servers or not. If you are getting this nag prompt all the time just uncheck this box to remove it. Note that in general IntelliTrace will use the same symbol paths as the debugger when looking to resolve symbols.

IntelliTrace Events


The IntelliTrace events page allows you to configure at what events IntelliTrace will collect and log debugging data. Events are broken up into broad categories that can be enabled as a block or just event by event. In just about all default cases collecting IntelliTrace data on just events is very low overhead, but some specific scenarios could create noticeable slowdown in your application if the wrong set of events is configured. For example, we have an event configured for the WPF: Expander.Expanded event. Given normal UI programming idioms Expander.Expanded is likely to be called by user action and not in any type of tight loop. But if you create some odd program that programmatically opens and closes an Expander as fast as possible you’re going to be taking more of a hit from the IntelliTrace event collected on every expand. So be smart and don’t turn on Console.WriteLine events if you are continually writing all sorts of stuff to the console with no breaks between calls.



Modules

When collecting data you don’t always want to instrument everything under the sun. Especially when collecting on method enters and exits you will build up way too much data. This page lets you specify a substring that will exclude any module whose name contains that substring. By clicking the radio button you can also convert it to an allow list if you just have specific binaries that you are interested in. By default it’s a block list with “Microsoft.” (to exclude framework code) and several Microsoft-specific public key tokens excluded from instrumentation.
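The same filtering can also be expressed directly in a collection plan’s ModuleList section. As a rough sketch (assuming the default plan’s format; the element names and the example substring here are illustrative, not from the shipped file), an allow list restricted to your own binaries would look something like:

```xml
<ModuleList isExclusionList="false">
    <!-- with isExclusionList="false", only matching modules are instrumented -->
    <Name>MyCompany.</Name>
</ModuleList>
```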

Ian Huff

Author: "ianhu" Tags: "visual studio, developer, 2010, intellit..."
Date: Tuesday, 26 Jan 2010 17:15

One of our main goals with IntelliTrace was to help close down the “no repro” gap between developers and testers. It’s frustrating and a waste of time and resources to have testers and developers going back and forth with testers reporting issues and developers unable to repro and investigate those issues in their development environment. IntelliTrace provides an obvious bridge of this gap with the ability for testers to collect iTrace files on their test machines and then hand those same files off to developers to debug in their usual Visual Studio environment. But this scenario only works if we make it as easy and seamless as possible both for testers to collect these files and for the developers to be able to open them associated with the proper bugs. To this end we’ve worked hard on IntelliTrace integration with both the new Microsoft Test Manager and Team Foundation Server work item tracking. Note that both IntelliTrace and Test Manager will be available for the first time with the Visual Studio 2010 wave of products (IntelliTrace and MTM are available in Visual Studio 2010 Ultimate and MTM is available in the Visual Studio Test Elements SKU).

Microsoft Test Manager

Before we get deeper into the integration of IntelliTrace and Microsoft Test Manager it’s best to have a little basic knowledge about what Test Manager is and how it can help your testing efforts. Test Manager is a new application that runs separate from Visual Studio and is used for creating, managing and running tests associated with a TFS project. It covers a broad spectrum of automated, manual, unit and load tests and includes test plan management to help coordinate testing efforts.

During test execution Test Manager can help to collect various data items such as screenshots, desktop videos and comments to be associated with each test run. Then, using TFS, this data can be persisted and attached to any bug that is filed off of a test failure. This is where IntelliTrace comes into the picture, as iTrace files are one of the items that can be collected during a test run and attached to a bug. So with a minimum of effort a manual tester can capture a file chock full of debugger info and have it automatically attached to the bug that they are currently filing. Then when the developer opens the bug in TFS they are able to open up the iTrace file in their IDE and start debugging into the exact run that caused the tester to file the bug. It’s important to note here (especially if you’ve not read one of the more general articles on IntelliTrace) that the IntelliTrace log just contains a specific set of debugging data that is only collected at specific points during execution. The developer is not actually getting a fully debuggable / executable look at the program that was running on the tester’s machine. That being said, the data provided from an iTrace file, especially one in which collection has been tuned to common problem areas in the code being tested, can be very useful either in guiding the start of a debugging investigation or in diagnosing the issue outright.

Collecting IntelliTrace data with Microsoft Test Manager

Microsoft Test Manager is a big, big new part of Microsoft’s testing strategy going forward. So as someone not even on the Test team I’m not going to do it the disservice of trying to cover all of its functionality. If you are interested in more on Test Manager I’d suggest starting with the MSDN docs or the Team Test blog, for this article I’m just going to be covering how IntelliTrace integrates with it and will be assuming that you know the basics of connecting MTM to your TFS solution and have a few tests and test plans available.

To set up IntelliTrace data collection we’re going to start by navigating to the Test Settings tab in the Lab Manager section of MTM. Then either create a new set of test settings or open up an existing test settings item. From the main test settings page click on the “Data and Diagnostics” category over on the left side to navigate to the screen shown below.

Test Settings

From this screen you can just click the box for IntelliTrace to turn automatic IntelliTrace collection on for the current set of test settings. If you click the Customize button to the right of that you will be taken to a screen that approximates the Tools->Options IntelliTrace page in Visual Studio. Look for a blog posting in the future giving more detailed info about those various options, but for now we will just leave the settings at default values.

IntelliTrace Settings

Now that we’ve turned on IntelliTrace for a specific set of test settings we need to start one of our tests using those test settings. To make sure you get the correct test settings you can right-click the test and choose “Run with options” from the drop-down menu, then make sure that you have the correct test settings selected in the subsequent dialog. Note that IntelliTrace is available only for automated (coded and UI) and manual test types.

Now during your test run whenever you file a bug the last section of debugging data collected via IntelliTrace will be automatically sectioned off and attached to the bug. Note that you can still keep running your test and filing more bugs without losing any IntelliTrace information, as we are just saving off a copy of the most recent section of IntelliTrace data when a bug is filed. In the picture of the new bug window you can see the .iTrace file in the lower left in the Details tab.

IntelliTrace Attached

Opening IntelliTrace data with Microsoft Test Manager

Now that the bug has been created and the IntelliTrace data has been associated with it all we have to do is open it up from Team Explorer in the Visual Studio IDE. Just open the bug (it will look the same as the picture above) and click the link to the iTrace file to open it up. There is more in-depth info on iTrace files here, the only real difference with this file is that the Test Steps section on the summary page is populated. This section will list all the test steps that were seen during this IntelliTrace section grouped by test case and test session. In the example I have shown below I didn’t mark any test steps either failed or passed so nothing is currently listed. If you double click on a test step debugging will start up at the location closest in execution to where that test step was marked either pass or fail. From there you can browse around all the IntelliTrace info collected to see if you can get started on a solution for the issue logged by the tester.

iTrace File


Ian Huff

Author: "ianhu" Tags: "visual studio team system, visual studio..."
Date: Monday, 16 Nov 2009 21:26

If you are not familiar with the new IntelliTrace feature in Visual Studio Team System 2010 then you might want to first check out either my or John Robbins’ introductions to this feature, as a general overview of IntelliTrace would be helpful before digging into this article.

What is an iTrace file?

In my introduction article linked above I talked a little about how IntelliTrace captures the current state of the debugger at multiple points during a program’s execution and, when F5 debugging, allows you to debug back in time to previous debug states in your program. This in and of itself is a very handy feature, but in this day and age it’s often hard to have a bug with an easy and consistent repro that you can debug on a local dev box.

The solution to this lack of a local repro is that IntelliTrace not only enhances your local debugging experience, it also saves all the collected debugger data points into a trace log file (.iTrace extension) that can then be opened and debugged using Visual Studio later and on a different machine. The analogy for this scenario is the black box in an airplane: iTrace files provide a “voice from the grave” from crashed programs that allows a developer to debug in and around the point of failure after the fact.

Integration with Microsoft Test and Lab Manager

One of the big new testing features being added in Visual Studio Team System 2010 is the Microsoft Test and Lab Manager (more info on MTLM on their blog site here). MTLM is a standalone tool that focuses on the tester role by providing a TFS-integrated UI for managing test cases and lab environments without the overhead of a full Visual Studio installation. Since one of the key focuses of IntelliTrace is to try to eliminate the “no repro” disconnect between developers and testers we knew that we needed to get IntelliTrace integrated with MTLM. This integration is accomplished via a combination of TFS and iTrace files. I’ll detail the scenario more in a future blog post, but at a basic level, at any time during a test run a tester using MTLM can choose to file a bug on a specific test step failure, and when that bug is filed an iTrace file of all the recent debugging events and exceptions is automatically collected and attached to the bug. Then, when the developer opens up the bug in Visual Studio, they can just click the iTrace file linked in the bug and be debugging into the exact execution path in which the tester was seeing the failure.

iTrace files collected during debugging

Whenever you are running a normal F5 debugging session from the Visual Studio IDE with IntelliTrace turned on you are collecting an iTrace file in the background. In this scenario you can pretty easily be using IntelliTrace features like browsing back in debug history without ever noticing that this file exists, especially since, to keep your system from getting clogged with iTrace files, we clean these files out when Visual Studio is shut down. So if you ran into something interesting while debugging from the IDE with IntelliTrace you will need to copy the iTrace file out from its saved location to keep it from being cleaned up. Just look under the following file path to see the iTrace files that have been collected during the current VS session: C:\Users\All Users\Microsoft Visual Studio\10.0\TraceDebugging. With both the IDE scenario and the MTLM scenario iTrace files will be truncated at a specific size (currently set to 100MB by default) to keep from filling up your hard drive. This truncation discards older events from the log, and the limit can be changed from Tools->Options->IntelliTrace->Advanced.

Collecting iTrace files from the command line

If you want to collect iTrace debugging files without having Visual Studio up and running we’ve provided the IntelliTrace.exe command line tool. IntelliTrace.exe will get its own blog entry sometime in the future, but if you want to figure out how to get it running, start with the /? switch for help. IntelliTrace.exe is located in your Visual Studio install at “Team Tools\TraceDebugger Tools.”

Working with iTrace files in Visual Studio

(Note: All screenshots are from my current working build and will look a little different from Beta 2 builds)

Regardless of whether you collected your iTrace file via MTLM, Visual Studio or IntelliTrace.exe, when you first open it up in Visual Studio you will end up with a document that looks somewhat like the one below.


Note that at this point we’ve just opened up the document summarizing the debugging session. No debugging session has been started and the time to open up the document should be pretty minimal. At the top of the document you will see a chart showing all the threads that were running during the life of this debugging session. Below that there are a series of lists containing more information about Threads, Exceptions, Test Events, System Information and Modules for the debugging session. Currently, the Exceptions list is expanded out and showing all the exceptions that were encountered during the debugging run. The exception that is currently selected in this list is represented in the thread timeline by a vertical red bar. This bar helps you match up exactly where in your program’s execution an exception was being thrown. In addition to supplying the thread, HResult and message of the exception we will list out the stack of each thrown exception in the textbox below the exceptions list.

Threads List:


The threads list provides a table view of the threads active during your debugging session. The actively selected thread will be highlighted in the thread chart above, and vice-versa.

System Info:


The system information section contains a set of information about the computer that this iTrace file was collected on. It seems pretty basic, but this info has already come in useful several times for me during my development work. In particular, knowing the OS, the number of processors and the CLR version has been useful to me when investigating bugs that QA has provided me iTrace files for.

Test Data:

I don’t have a picture of this right now, as I’m going to be speaking more about this when I cover MTLM integration in greater depth. When you collect an iTrace file via MTLM, the test data section will contain info on all the test steps that were logged via MTLM during the execution of the tests (since the file shown here was not collected via MTLM, the section is grayed out).



Modules:

As expected, this control lists out the modules that were loaded during debugging.

On all of these controls there is a search box above the list. If you are looking for a specific module or exception just start typing into one of those boxes to narrow down the results being shown in the list.

Starting a debug session from an iTrace file

Up until now we’ve been dealing with the information that you can glean from the summary page of an iTrace file. But while it can be quite informative the real point of the summary page is to allow the user to jump into debugging close to some point of interest. Lots of information can be collected during a debugging session and if you were to just jump into debugging an iTrace file blindly it could take a while to get to the correct location to start diagnosing a failure.

From an iTrace summary page you can jump into debugging from a thread, from an exception or from a test event (with a caveat that I’ll mention later for test events). For threads you can double click on a thread in the thread chart, double click on a thread in the threads list or click the “Start Debugging” button. Any of these options will start up the debugger and jump you to the last event that we collected on that thread. We chose the last event since in many cases an iTrace log captured a failure or crash, so starting at the end point makes more sense than starting at the beginning of the log when all was running smoothly. For exceptions, you can either double click the exception in the list, or click the “Start Debugging” button below the exception list. Starting debugging on an exception will start the debug session exactly on the exception event selected (see picture below). Note that starting the session is a slightly slow process and it might take a bit for the debugger to get up and running. Test events function much the same as exceptions, with the only difference being that, since test events are not represented in our debugging UI, debugging context will actually be set to the event nearest in time to the selected test event.


This entry is mainly focused on the iTrace file and summary page so I’ll cover more about the IntelliTrace UI during debugging later. But as for now, you can look at the picture above and see that debugging has been started and our context has been set to the exception that I clicked in the summary page by looking at the source location and the autos window.


Up next

Coming up next I’ll be talking more about the various controls that you can use to move around in IntelliTrace debugging data and what data we will be showing in the Visual Studio UI.

 Ian Huff


Author: "ianhu" Tags: "visual studio team system, visual studio..."
Date: Tuesday, 20 Oct 2009 16:14

It’s been a little while since I’ve gotten a post up here, due to two main reasons. First off, we’ve been pushing really hard as a dev team to get Beta 2 polished up and out the door. And secondly, the actual name of our feature has been in flux for a little while and I wanted that to be sorted out before I blogged any more.

So as of this past Monday Beta 2 of Visual Studio 2010 and .NET Framework 4.0 have been released out to customers (MSDN subscribers can grab the download from here). And at the same time we finally got clearance to use IntelliTrace as the official name for our new historical debugging feature. So from here on out expect to see much more of me blogging about the ways that IntelliTrace can help improve your debugging process and also more about how we built V1 of IntelliTrace and what we’re going to be doing for V2.

For an introduction to what IntelliTrace is all about check out my intro on the topic or a recent blog post from John Robbins.

Ian Huff

Author: "ianhu" Tags: "visual studio, team system, 2010, histor..."
Date: Thursday, 18 Jun 2009 17:13

IntelliTrace Events

(edited 1/27/2010 to update branding from Diagnostic Event to IntelliTrace Event) 

If this post is the first time that you have heard about the IntelliTrace feature in Visual Studio Team System 2010, then you might want to first take a look at my little overview of this nifty new feature before diving into the info below (John Robbins also has an excellent blog post here). But if you don’t want to read that article, the gist of it is that IntelliTrace records your debugging context at specific points during your program’s execution. Then, while debugging (or after finishing debugging, if you load up one of the log files that we collect) you can move back to the locations where we collected data to examine things like locals, parameters and return values. You can set IntelliTrace to collect data at every function enter and exit but, to avoid the slowdown that this causes, by default we just collect data at specifically defined IntelliTrace events. This article is here to give you some more information on IntelliTrace events and on how they are used.

What is an IntelliTrace event?

Simply put, an IntelliTrace event is a specific point during a debugging session that is likely to be of interest to the programmer when debugging. Examples of IntelliTrace events that we define are things like opening a file, writing to the registry or clicking on a WPF button. IntelliTrace events were selected with the dual criteria of being placed at interesting locations, but not at any locations that would be called too often and create excessive slowdown in your application.

How will I see what IntelliTrace events are being collected?

Since collecting data on just IntelliTrace events is low overhead it will be on by default for all managed projects in Visual Studio 2010 Ultimate. So if you just start up a basic C# console application, set a breakpoint and hit run you will see the new Debug History tool window pop up docked with the Solution Explorer. By default this window will show you a list of all the IntelliTrace events that have been collected during your current debugging session. Below I’ve posted a picture of the Debug History window for a little WinForms application that I created to exercise a few different IntelliTrace events (file open, button click and registry write). From this picture I’ll provide a brief walkthrough of the IntelliTrace Events control (Note: I’m working with the current in-progress Beta 2 bits, so please understand both that it will look a little different from Beta 1 and that the final RTM product might also look somewhat different). At the point this picture was taken the program was stopped in the live debugger with a breakpoint at the end of the main function.

The main space in this control is occupied by a flat, chronological list of the most recent IntelliTrace events hit during your debugging session. If you look at the bottom of the list you will see the “Live Event” entry; it corresponds to the current location of the live debugger and tells us that we are at a breakpoint at line 26 in the file form1.cs. Moving up from there you can see that we recorded a few registry access events, a few file events and a user input gesture. Each entry in this list represents a specific location where we grabbed much of the current debugger state and stuffed it into our log file. Using this flat list you can jump back in time to any of the IntelliTrace events that were captured. Note that there are page forward and page back buttons at the bottom of the control for when too large a number of events has been collected to browse easily.

If you are looking for a specific event or event type in the list we’ve provided a few browsing aids in this tool window. First off, if you start typing in the search box the events in the list will be filtered down to just those events containing your search text, which is very helpful when trying to track down a specific exception that was thrown. And secondly, if you use the “All Categories” and “All Threads” dropdowns you can restrict the list to showing just events from specific categories (see the below Tools->Options section to learn more about our categories) or just events from specific threads.

What happens to the debugger when I select an IntelliTrace event?

 In the picture below I’ve selected the “File: Close” event. Also I’m showing the rest of the IDE along with the tool window so you can see how the debugging context changed with the switch over to historical mode.

Notice that in the IntelliTrace events tool window we’ve selected the event that you are currently at, as well as giving some inline expanded information about the event. For this file close event we give a little more information about the event (“Close a FileStream accessing the path C:\2xj3hs4l.aox”), we mention the thread the event took place on and we give some quick links to other debugger tool windows that might have some more information about the event.

Aside from the tool window being updated there were some other changes in the IDE when we selected an event in the flat list and moved back to historical mode. First off, if you look at the source editor you won’t see the usual yellow arrow indicating the next statement anywhere in the source file. Instead you will see a curved orange arrow located at the end of the “using(FileStream fs =…)” block of code. The curved orange arrow indicates that the current historical debugger context is somewhere in external code below the given statement, roughly corresponding to the green arrow that the live debugger uses when you are stopped in external code. In general when you move between IntelliTrace events your debugger context will not be located directly in user code, as the IntelliTrace events are all defined at specific points in Microsoft framework code.

This can be better illustrated by looking at the callstack window in the picture above. When we moved back to the IntelliTrace event, the callstack updated to reflect the time when we captured that event. The actual code context is down in mscorlib.dll at FileStream.Dispose, but since we don’t have code for that location we show the location in user code that triggered that specific event (the point where the end of the using statement disposed of the FileStream). When moving back in history we will also populate the values in the autos and watch windows, with some specific limitations: we capture only primitive values (strings, doubles, ints, etc.) and two levels of primitive values off of any objects.
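To make the scenario concrete, here is a small sketch of the kind of user code that produces the file events discussed above (the class name and file path are my own placeholders, not the ones from the screenshots). The key point is the closing brace of the using block: Dispose runs inside mscorlib, which is why the debugger shows the curved orange arrow there rather than the usual yellow arrow.

```csharp
using System;
using System.IO;

class FileEventDemo
{
    // Writes two bytes through a FileStream and reads the file back.
    // Under the default IntelliTrace settings described above, the
    // FileStream constructor and the Write call each map to recorded
    // file events.
    public static string WriteAndRead(string path)
    {
        using (FileStream fs = new FileStream(path, FileMode.Create))
        {
            byte[] data = { 0x48, 0x69 }; // ASCII for "Hi"
            fs.Write(data, 0, data.Length);
        } // "File: Close" is logged here: FileStream.Dispose runs in
          // framework code, so the historical debugger can only point
          // at the end of the using block in user code.

        return File.ReadAllText(path);
    }

    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "intellitrace-demo.txt");
        Console.WriteLine(WriteAndRead(path)); // prints "Hi"
        File.Delete(path);
    }
}
```

Note also that if you inspected `fs` in the watch window from a historical context, you would see only its primitive fields (and primitives two levels down), per the capture limitations mentioned above.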

How can I change what IntelliTrace events are captured?

We’ve tried to pick a solid set of IntelliTrace events that cover some common problem areas and that are not hit too often in a program (in which case collection could slow down your application too much). But our default settings might not be the right fit for every user, so we’ve added the ability to trim or add to this initial set of events via a Tools->Options page.

If you open up Tools->Options you will see a new IntelliTrace option in the list. Clicking this item will first take you to the general settings page shown below.

On this page you will notice that by default (for managed projects) IntelliTrace is turned on and is set to collect debugging data only at IntelliTrace events. In a later blog post I’ll cover more about the “Events, Methods and Parameters” setting, but in a nutshell turning it on collects data at IntelliTrace events and at all function entries and exits. This gives you much more complete data to look at, but comes at a much higher cost in terms of application performance while debugging.

For now, we’ll just leave the general settings as they are and move on to the “IntelliTrace Events” tab shown below.

On this page you can see the selection of IntelliTrace events that we provide, broken down by framework category. By checking or unchecking events you will add them to or remove them from the set of events that we capture while debugging. For example, if your application performs tons of file opens and closes, you could turn off the collection of file events to keep collection from slowing your application down too much.

IntelliTrace Events Summary

With the above post I’ve tried to lay out a little of how IntelliTrace events work, how you can use them to browse back into debugging history, and how to configure which events are collected. By collecting data at common fault areas and points of debugging interest, we think they will be useful in solving many debugging issues, and the low overhead allows us to turn them on by default for all users. However, there is much more to IntelliTrace than just event collection. For example, one of the more interesting features is that the debugging log can be saved off and opened in the debugger later on a different machine. You can see how IntelliTrace events would be highly useful in this scenario: a tester could collect a low-overhead log, attach it to a bug, and the developer could then debug back in time to specific events on their own machine at a later date. In my next blog post I’ll dig deeper into this log scenario and examine how it can help to solve the “no repro” issues that plague developer / tester communication.

Ian Huff

Author: "ianhu" Tags: "visual studio team system, visual studio..."
Date: Thursday, 14 May 2009 02:49

Back to Blogging

So for the last 18 months or so there has been nary a peep on this blog. One contributing factor to this extended silence is that I now have an 18-month-old daughter and was feeling a bit of a time crunch. But the big reason is that I moved teams within Visual Studio and was working on a project that was flying under the radar for about a year. This new product is the Historical Debugger that will be shipping with Visual Studio 2010 (I’ll have an introduction post up for this feature later today), and it’s much more out in the open now, so I’m ready to start blogging up some hype for it. I’m still a part of the Visual Studio Diagnostics team and just down the hallway from my profiler ex-coworkers, so I’m still around for answering / forwarding profiler questions. As before there will be a few odd-ball postings here and there, but in general just expect lots of information, samples and chat about the Historical Debugging feature.


Author: "ianhu" Tags: "visual studio team system, visual studio..."