I have had the pleasure over the past few months of spending some time playing with an early rendition of "Elevation of Privilege: The Threat Modeling Game". According to Adam, "Elevation of Privilege is the easiest way to get started threat modeling". I couldn't agree more. If you have a team that is new to the whole process of threat modeling, you will want to check it out. If you are at RSA this week, drop by the Microsoft booth and pick the game up for free. If you aren't, you can download it here.
EoP is a card game for 3-6 players. The deck contains 74 playing cards in 6 suits: one suit for each of the STRIDE threats (Spoofing, Tampering, Repudiation, Information disclosure, Denial of Service and Elevation of Privilege). Each card has a more specific threat on it. You can see a short video on how to play and some more information about the game by checking out Adam's post here. In the end, it is a game that makes it possible to have more fun when thinking about threats. And that's a good thing.
Even more impressive is that they have released the game under a Creative Commons Attribution license, which gives you the freedom to share, adapt and remix the game. So if you feel you can improve upon this, step up and let everyone know!!
Congratulations to the SDL team at Microsoft for creating an innovative way to approach the concept of threat modeling.
So this week my buddy Charlie and I threw a Windows 7 party for the IT pro community in Vancouver, BC at the Microsoft office.
The office could only handle 80 people, and we simply had to turn people away. Sorry to those who weren't allowed to come. Many people came early, and hung out in the hallway even before they were allowed in.
With almost 100 people in that hallway just out of the elevator, that hall was WARM. I felt bad for some of the people, as you could tell they were overheating. But we weren't ready to let them in while we set up the rooms with different Windows 7 systems.
When we did open the doors it was a mad rush for everyone to get in where it was cooler and they could grab a cold one and cool down. Thankfully everyone was patient and polite. Thanks to everyone for that!
Once they got in, there were several different rooms that they could go hang out in. In one room, Charlie had brought an HP TouchSmart so people could experience the new multitouch functionality of Windows 7. Kerry Brown, a fellow MVP with experience in the Windows shell, stayed in the room teaching people all the new shell features like Libraries, Jump Lists, etc., and I am told schooled some admins on the nitty gritty of PowerShell. Good job Kerry! Thanks for helping out!!!
It was interesting that every time I looked in that room, people were gathered around the device playing with the TouchPack games and with Virtual Earth. It was interesting to hear my buddy Alan comment that his experience on his iPhone with multitouch, especially with Google Earth, was far superior to what he was seeing there. Maybe that is something Microsoft can take away from that. Of course, there is a big difference between a 24 inch monitor and a small iPhone screen. But the point is well taken.
We had the biggest crowds when we did demos in the main presentation room. When I was presenting on DirectAccess security I had my good friend Roger Benes (a Microsoft FTE) demonstrate how Microsoft uses DirectAccess themselves. Using the Microsoft guest wireless he connected seamlessly to Microsoft's corpnet, which allowed us to demonstrate the policy control and ease of use of the technology. I am told a lot of people enjoyed that session, with several taking that experience back to their own office to discuss deployment. That's always good to hear.
Charlie impressed the crowd showing how to migrate from Windows XP and Vista to Windows 7. He demonstrated Windows Easy Transfer and Anytime Upgrades and took the time to explain the gotchas in the experience. He even had me demonstrate XP mode on my laptop so people could see how they could maintain application compatibility with a legacy Windows XP environment virtualized on Windows 7.
Of course, I had a lot of fun hanging out in the far back room. I got to demonstrate some of the security stuff built into Windows 7 like BitLocker, AppLocker and BitLocker to Go. I was even asked about Parental Controls, which I couldn't show on my laptop since it's domain joined, but was able to show on a demo box Roger had brought for people to play with.
One of the more interesting things I helped facilitate was asking my buddy Alan to bring his Macbook in. He is a great photographer who works with Linux and OSX a fair bit, on top of using Windows. Actually, all the photos you see in this post were taken by him. Thanks for sharing them Alan!
Anyways, I convinced him to let us use his Macbook to install Windows 7. He reluctantly agreed, as you can see from the picture below when he was looking at the Snow Leopard and Windows 7 media together. :-)
We had a fair number of people crowd around his Macbook as he went through the process of installing Bootcamp and deploying Windows 7. Interestingly enough, it flawlessly converted that Apple hardware into a powerful Windows 7 system in about 20 minutes.
Charlie and I were REALLY busy. We presented different sessions in different rooms throughout the night. Actually, I very rarely even saw him except for a few times when he called me in to help out with a demo. Sorry we couldn't party more together Charlie. And my apologies to those that were looking forward to our traditional "Frick and Frack" show where we banter back and forth.
Many of you may not know that outside of computers, I am an avid indie filmmaker. Actually, that is giving me too much credit. I am an amateur cinematographer at best, who had high hopes that I would get a chance to film everyone's impressions throughout the party. Unfortunately, I was so busy presenting, I had almost NO TIME to get any film recorded. *sigh* Alan did get a snap of a rare moment when I actually caught someone on film.
Of course I can't complain too much. I had a great time getting to show all the neat features in Windows 7, and answering the tonnes of questions that people had.
Of course, when the night finally wound down, it was nice to close out the party and watch the Vancouver skyline change. When we were done, we had the opportunity to hang with our IT friends in Vancouver and bring in the birth of Windows 7.
I have several people I would like to thank for making the evening possible. Charlie and I couldn't have done it without the support of people like Graham from VanTUG, Jas from VanSBS and Roger from Microsoft. Speaking of Microsoft, I have to give a shout out to Sim, Sasha and Ljupco in the MVP team who helped us get through all the red tape to throw the party at Microsoft's office. And many thanks to Brent, Alan and Kerry for helping us out throughout the event. My thanks to all of you.
I hope everyone had a good time. And if anything, Charlie and I hope you learned something that will help you deploy and use Windows 7 in your organizations. Happy birthday Windows 7. Welcome to a new world without walls!
P.S. All the pictures you see here were taken by Alan and used with his permission. You can check out some of his other amazing work at bailwardphotography.com.
It's only a few days away. The official launch of Windows 7 is here!
And of course, that means it's time to party!!! You may have heard about the Windows 7 House Parties that are being thrown all around the world. Basically thousands of small groups of people are getting together to see what Windows 7 can do.
Personally, I thought we needed to do more. So fellow MVP and friend Charlie Russel and I decided we would throw our own party, but focused on IT pros and not the consumer angle. We plan to have a lot of fun, showing the cool features of Windows 7 for IT pros like BitLocker, AppLocker and DirectAccess. We plan to bring a bunch of laptops and show new shell extensions, PowerShell, new multitouch features and basically sit around and enjoy hours of Q&A for those that haven't tried it yet. We are even planning on installing Windows 7 on a guest's Macbook to show how well it does using Bootcamp on Apple hardware, and even on small netbooks.
I also wanted to send a message out to the Vancouver IT community to clear up some misconceptions. This is a party hosted by Charlie and myself. This is NOT a Microsoft event. Microsoft was gracious enough to let us use their facility and even sprung for some of the cost for pizza. However, they never planned this out. Nor did the local VanTUG and VanSBS groups.
Our party is an INVITATION ONLY event. Because we are limited in our own budget and constrained in where we could have the party... we only have enough room for 75 people. So we could only allow a certain number of our friends to come. Charlie and I decided the best way to handle this would be to simply invite who we wanted, and then open it to our friends at the local user groups on a first come, first served basis. This is why there is a cap on the registration on the event, and why it booked up so quickly.
I am hearing through the grapevine that there is a LOT of dissent in the Vancouver IT community from people who feel that Microsoft, VanTUG and VanSBS did a poor job organizing this. LET ME BE CLEAR. This is a personal party that Charlie and I organized. If you were lucky enough to get an invitation and registered, great. But if you didn't, don't take it out on Microsoft, the local usergroups or their leaders. It's not their fault!!!
We are using our own money and time to throw this party. Please be considerate and respect that we couldn't invite all of you. I am happy to see there is so much excitement about Windows 7 and that you wanted to party with us. And I am sorry if you feel it isn't fair that you didn't get invited. Please feel free to share your own Windows 7 experience, and host your own party. We may be the only IT pro party during the Windows 7 launch, but nothing says you can't have your own!
So party on. Welcome to a new world. Welcome to Windows 7!
Hey guys. I noticed Twitter is abuzz with a few podcast interviews I did on RunAs Radio lately. I thought I would post the links for those of you who don't follow such tweets.
There were two interviews I did last month:
The first interview was a discussion of free tools available for network monitoring and diagnostics. The second was an in-depth discussion on using DirectAccess with Windows 7 and Windows Server 2008 R2. I do hope you find both interviews fun and useful.
So have you ever tried to restrict access to your applications in a way so that you can maintain least privilege?
I do. All the time. And recently it blew up in my face, and I want to share my experience so others can learn from my failure.
Let me show you a faulty line of code:
if( principal.IsInRole( "Administrators" ) )
Seems rather harmless, doesn't it? Can you spot the defect? Come on... it's sitting right in the subject of this post.
Checking to see if the current user is in the "Administrators" group is a good idea. And using WindowsPrincipal is an appropriate way to do it. But you have to remember that not EVERYONE speaks English. In our particular case, we found a customer installed our product using English, but had a user with a French language pack. Guess what... the above code didn't work for them. Why? Because the local administrators group is actually "Administrateurs".
The fix is rather trivial:
SecurityIdentifier sid = new SecurityIdentifier( WellKnownSidType.BuiltinAdministratorsSid, null );
By using the well-known SID for the Administrators group, we ensure the check works regardless of the name or language used.
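Putting the two snippets together, here is a quick sketch of what the language-neutral check might look like (the class and method names are mine, just for illustration):

```csharp
using System.Security.Principal;

class AdminCheck
{
    // Returns true if the current Windows user is in the built-in
    // Administrators group, regardless of the group's localized name.
    static bool IsCurrentUserAdmin()
    {
        using (WindowsIdentity identity = WindowsIdentity.GetCurrent())
        {
            WindowsPrincipal principal = new WindowsPrincipal(identity);

            // The well-known SID S-1-5-32-544 identifies BUILTIN\Administrators
            // on every Windows install, whatever language pack is in use.
            SecurityIdentifier adminSid = new SecurityIdentifier(
                WellKnownSidType.BuiltinAdministratorsSid, null);

            // IsInRole has an overload that takes a SecurityIdentifier,
            // so no group-name string ever enters the comparison.
            return principal.IsInRole(adminSid);
        }
    }
}
```

The key difference from the faulty version is that the comparison never touches a localized string; "Administrateurs" and "Administrators" both resolve to the same SID.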
Lesson learned the hard way for me. We have an entire new class of defect we are auditing for, which we have found in several places in our code. It always fails securely, NOT letting them do anything, but that's not the point. It is still a defect. Other accounts we weren't considering were "Network Service" (it's an ugly name on a German target) and "Guest". Just to name a few.
Hope you can learn from my mistake on that one. That's a silly but common error you may or may not be considering in your own code.
OK, so anyone who knows me expects that I stay up on the bleeding edge when it comes to dev tools and operating systems. Yes, I have been using Windows 7 for almost a year now and have been loving it. However, I never ran it on my production dev environment, as I felt I did not want to disrupt our software development workflow until Windows 7 was in final release. With it out to RTM now, I felt it was as good a time as any to migrate, especially since we recently released our latest build of our own product and have a bit of time to do this.
So last week I deployed Windows 7 to both of my production dev systems, as well as the primary QA lab workstations. It was the worst thing I could ever have done, halting all major development and test authoring in our office due to a MAJOR gotcha Microsoft failed to let us know about during the beta and RC.
Ready for this....
You cannot run Virtual PC 7 (beta) in Windows 7 WITHOUT hardware virtualization. OK, I can live with that, since the new XP mode (which is an excellent feature) may very well need it. That didn't concern me. It was my fallback that failed to work that blew my mind...
You cannot run Virtual PC 2007 in Windows 7, as they have a hard block preventing it from being installed on Windows 7 due to compatibility issues. So the same machine that I have been using for development using Vista for a few years has now become a glorified browsing brick. I cannot do any of my kernel mode and system level development or debugging as I am not ALLOWED to install Virtual PC 2007 on the same hardware that worked before. *sigh*
What surprised me is that Ben, the Virtual PC Guy at Microsoft blogged that it was possible to run Virtual PC on Windows 7, and in his own words:
While all the integration aspects of Virtual Machine Additions work (mouse integration, shared folders, etc...) there is no performance tuning for Windows 7 at this stage - so for best performance you should use a system with hardware virtualization support.
That sounds to me like it will still work without hardware virtualization. Seems that is not the case.
Since Windows 7 is already at RTM, if this is a block due to Windows, it isn't going to be fixed anytime soon. So hopefully they can do something on the Virtual PC side of the equation, or they are going to disappoint a lot of unknowing developers.
This just became a MAJOR blocking issue for many dev shops that are using Virtual PC for isolated testing.
If this concerns you, then I recommend you download Intel's Processor Identification Utility so you can check to see if your dev environment is capable of running hardware virtualization.
Failing to do so might get you stuck like I did, now having me decide if I want to degrade back to Windows Vista just to get work done. There goes another day to prep my main systems again. *sigh*
UPDATE: Fellow MVP Bill Grant has provided me a solution to my dilemma. It appears that Virtual PC 7 (beta), a built-in component for Windows 7 when installed, is causing the blocking issue. By going into "Turn Windows features on or off" and removing Virtual PC support (and effectively removing XP mode support), Virtual PC 2007 can then be installed on machines that do not have hardware virtualization support.
This isn't the most optimal behaviour, but it's acceptable. Since without VT support in my CPU I can't use XP mode anyways, removing it does not limit Windows 7 from functioning. I have reported this odd behaviour to Microsoft since:
- Virtual PC 7 and XP Mode simply shouldn't be installing if my CPU isn't supported
- When the Customer Experience dialog pops up there is an option to "Check for Solutions Online". This would be a PERFECT place to explain that uninstalling the Virtual PC 7 and XP mode support built into Windows 7 will let Virtual PC 2007 install without the block. Right now it reports that no solution is available.
So if you do NOT have VT support in your CPU, please uninstall Virtual PC 7 support if you installed it. VPC 2007 will then properly install for you.
So recently Microsoft banned memcpy() from their SDL process, which got several of us talking about perf hits and the like when using the replacement memcpy_s, especially since it has SAL mapped to it. For those that don't know, SAL is the "Standard Annotation Language" that allows programmers to explicitly state the contracts between params that are implicit in C/C++ code. I have to admit it's sometimes hard to read SAL annotations, but it works extremely well for helping compilers know when things won't play nice. It is great for static code analysis of args in functions, which is why it works so well for things like memcpy_s()... as it will enforce checks for length between buffers.
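To make the memcpy_s side of the debate concrete, here is a rough sketch of the kind of destination-size check it enforces. This is a simplified stand-in written for illustration, not the real CRT implementation (the actual memcpy_s returns errno_t codes and routes failures through the invalid parameter handler):

```c
#include <stddef.h>
#include <string.h>

/* checked_memcpy: a simplified illustration of what memcpy_s adds over
   memcpy. It refuses to copy more bytes than the destination can hold,
   and zeroes the destination on failure instead of silently overflowing. */
static int checked_memcpy(void *dest, size_t dest_size,
                          const void *src, size_t count)
{
    if (dest == NULL)
        return -1;
    if (src == NULL || count > dest_size) {
        memset(dest, 0, dest_size);  /* don't leave stale data behind */
        return -1;
    }
    memcpy(dest, src, count);
    return 0;
}
```

A call like checked_memcpy(buf, sizeof(buf), input, input_len) fails cleanly where a plain memcpy(buf, input, input_len) would overflow buf. The SAL annotations on the real memcpy_s go one better, letting static analysis flag a mismatched size argument at compile time rather than runtime.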
Anyways, during the discussion Michael Howard said something that had me fall off my chair laughing. And I just had to share it with everyone, because I think it would make a great tshirt in the midst of this debate:
Oh, I'm thinking of banning zero's next - so we can no longer have DIV/0 bugs! Waddya think?
OK.. so it's a Friday and that is funny to only a few of us. Still great fun though.
Have a great long weekend! (For you Canadian folks that is)
So in the session I spoke at today at SMBNation, I showed how to use TS RemoteApp with TS Gateway on SBS 2008 to deliver remote applications through Remote Web Workplace. It is one of the coolest features in the Windows Server 2008 operating system. But we have to remember what it's doing.
Part of the conversation we had was on the difference between local desktop display in TS RemoteApp vs just having a full desktop to the Terminal Server. One issue that came up was that as a RemoteApp, you can't run other applications.
Well, that is not actually true. If you think that, then a TS RemoteApp has the ability to be an attack vector for you. What do I mean? Well below is a screen shot of what happens if you hit CTRL-ALT-ENTER with the cursor focused on the RemoteApp window (in this case MS Paint running remotely):
At this point, you can run Task Manager.... then hit File->Run and run something else. In my case, I showed a few people afterwards how to start cmd and start exploring the network. Now, you will only have the privileges of the user account you are logged in as, but it is still something you have to be careful about. If you think a RemoteApp bundle prevents access to other applications or the network... you are wrong.
So is this bad? No. Is it really an attack vector? No. You just need to understand that when allowing ANY type of Terminal Services based access, you have to restrict the policies and access accordingly. No matter if it's local or remote. Running a TS RemoteApp bundle of Office will display on the local desktop, but it is STILL running on the Terminal Server. So it will be browsing the network the Terminal Server is connected to as the local net. It will also browse your own drives mapped via tsclient. So you have to remember that.
Hope that's useful. A TS RemoteApp bundle does NOT mean you won't have access to the TS desktop when displaying remotely on your personal desktop. And that's not a bad thing. TS RemoteApp is a convenient way to extend the workspace to your local machine, anywhere in the world. No pun intended. That's its power... and the benefit. Great remote productivity enhancement in Windows Server 2008. Use it. (Safely of course)
So Susan has been on my case about Twitter for some time now. In a recent round table we were recording she "beat me up" about it, and tonight on IM we had a good discussion about the REAL vs PERCEIVED risks in Twitter.
Susan's biggest complaint is that security minded individuals shouldn't be blindly recommending the use of Twitter without educating the user on 'safe-twittering'. I would say that same logic exists for setting up web pages, blogs and the use of social networking sites like Facebook.
She stepped that up a bit tonight when she blogged her discomfort in the fact the RSA Conference was recommending Twitter as well.
So in an effort to stop spreading the FUD about Twitter insecurity, I wanted to share some of my thoughts through a quick set of safe twittering rules.
@DanaEpp's 5 Rules of Safer Twittering
- Never share information in a tweet that you wouldn't share with the world. You can never expect to take it back once it's on the Internet. Even though you can delete a tweet, 3rd party clients may still have it archived. If you feel you want to share private thoughts through Twitter, consider using a "Private Account" and limiting it to only people you trust and want to share with. Of course, remember nothing prevents your friends from sharing your tweets with the world. So never share private information on Twitter. Ever. It's just easier that way.
- There is no assurance that a Twitter account is the person you believe it is. Deal with it. Anyone can register an account if it doesn't already exist. As a real world example, for some time @cnnbrk was NOT an official CNN account, even though most of the Twitter world thought it was. It wasn't until recently that CNN bought the account from James Cox (the account holder) for an undisclosed amount of money. Another example is the fact that one of Susan's Twitter accounts was actually created by a fellow SBS MVP, and not actually her. :-)
- Never click on links in a tweet, unless you trust the URL. If unsure, don't click! The worms that were used to attack Twitter came from people getting users to go to profile pages etc. that they had control over for some interesting script attacks. With only 140 chars, it's common to "shorten" the URL. Which means you might be clicking on a link blind. That's fine. But only trust shortened URLs that can be previewed BEFORE you go to them. As an example, my recommendation is to use something like TinyURL. However, here is the trick. When you create a TinyURL, use the preview mode. As an example, if you want to send someone to my blog you can use http://tinyurl.com/silverstr to go directly. However, if you use http://preview.tinyurl.com/silverstr it will stop at TinyURL.com and let the user SEE the link before they actually get to it. That is much safer. If using TweetDeck, select TinyURL as the provider, and when it creates the shortened URL, simply add "preview." in front of "tinyurl.com".
- Use a 3rd party Twitter client instead of using the Twitter.com website directly. I am a fan of TweetDeck and Twitterfon, but there are tons of different clients out there. Why? It is the lesser of two security evils as it relates to web based attacks in Twitter. Most clients have ways to reduce or turn off linking, prevent the script attacks in profile viewing, and are generally just an easier environment to stay protected in. Are these clients free of attack? Of course not. But it's another layer of defense. Of course... you need to have trust in your client. But that's a story for another day ;-)
- You never know who is following you. Remember that. As you use Twitter more and more, you never know who might be watching. I recently had someone who has been trying to get an interview with me who follows me on Twitter. He knew where I was having coffee one day because of a tweet I wrote (and its geotag) and ended up coming down to confront me with his resume. Which was inappropriate in my books. But my own fault. I wasn't too concerned... but it definitely gave me pause when considering my daughter uses Twitter and could be as easily found. Nothing like the potential of being stalked. GeoTagging makes it way too easy to find you. Remember that.
Look, Twitter is addictive. Simple. Short. Fast. A great way to see the thoughts of others you might care about. Ultimately though... like any other Internet based technology it has the potential to be abused... and put you at risk. No different than websites or blogs.
So be careful. Follow these rules and enjoy the conversation!
So John Bristowe, Developer Evangelist for Microsoft Canada will be hosting a Coffee and Code event in Vancouver tomorrow from 9 to 2 at Wicked Cafe. Come join him and fellow Microsoft peers Rodney Buike and Damir Bersinic as they sit and share their knowledge over a cup of joe.
I will be there too, and will be available if anyone wants to talk about secure coding, threat modeling with the SDL TM, or integrating AuthAnvil strong authentication into your own applications or architectures.
I do hope to see some of you there. And if I don't... I will be seeing you at #energizeIT right?
What: Coffee and Code in Vancouver
When: April 8th, 2009 from 9am - 2pm
Where: Wicked Cafe - 861 Hornby Street (Vancouver)
Recently I had an interesting experience that I think is noteworthy. Something worth sharing with my peers and circle of influence.
Last month I had the experience of accidentally backing up 7GB of MP3 data to our offsite data backup provider, i365 (formerly eVault). I have been a happy customer of their service for YEARS. It works as intended, and quite frankly I rarely even think about them as it "just works". But I got nailed with a HUGE overage bill that blew away my DR budget. It was not a pretty sight. Half a year's budget spent in two months.
I gave them a call to find out what was going on, and their Customer Service technical team was awesome in helping me to identify the culprit. We quickly stopped that folder from being backed up any more, and then created a filter to prevent media file extensions from ever being backed up again. This wasn't a standard web based, email only support option. It was a real, living, breathing geek who knew how their software works. And that is important to me... it let me address the issue in a pretty fast manner and move on to more interesting pursuits.
However, the fact remained that it was an expensive lesson on what NOT to do. I had overages of about $26/GB, which is insanely expensive by today's rates. Then again, it was a plan I was on from over 3 years ago. So I can't really blame them for that.
So I twittered in frustration. And Vlad Mazek over at Own Web Now sent me some information about his offerings, which from a cost perspective are way more in line with what a small business can afford. And ultimately I sent out the following tweet after learning about his services:
Holy cow. OwnWebNow offsite backup appears to be way better for small business than eVault.
Now from a social media perspective that might not mean much. But it had an interesting cascading effect worth noting. It seems management over at Seagate heard about the tweet. And it caused a meeting to be scheduled between myself, my eVault account manager and her director.
We had our conference call this morning. Talk about service! They listened to my concerns, and reviewed my account with them. Being with them for so many years, they wanted to keep my business and wanted to make things right. And from the action items from the meeting, it sounds like they will.
Our data needs have changed. We have doubled the amount of data we need to store offsite, and being hit with 4x overage charges isn't acceptable. They listened to the pain I have identified, and are addressing it with a new plan that is more in line with my needs and expectations. Guess what? It is going to cost me more money. Considerably more money than if I went with Own Web Now's service. However the difference is WORTH it to me, and although I haven't made a final decision yet... I am leaning heavily to stay with them. As a small business owner my loyalty is to my company and its bottom line. However, it is balanced with the costs of good technical support and great customer service. Something Seagate/i365 has shown me today.
Customers matter. Without them, a software company is nothing. And it seems i365 gets that. And it seems they listen to their customers on Twitter. That's just awesome. And that small gesture has probably secured my business for many years to come.
Although software security is still in its infancy, there are several methodologies like Microsoft SDL, OWASP CLASP and Cigital Touchpoints that are being adopted by more and more companies as part of their software security initiatives. They share a lot of common ground. A new study driven by Gary McGraw, Brian Chess and Sammy Migues investigated these common traits across several world leading companies, including Microsoft, Google, Adobe and EMC. Entitled the "Building Security In Maturity Model (BSIMM)", it helps to document a process of understanding and analyzing the real world experiences these companies have had in their software security development lifecycles.
I was privileged enough to get early access to this study, and have to say over the last few weeks I have reflected on their skeleton and see some real merit for using BSIMM in enterprise environments. It describes a well rounded maturing process that can easily be adopted, even if in stages, to significantly increase the security effectiveness of a company's development process.
I highly recommend taking a look at it. You can download it here.
If there is one criticism I would have of BSIMM, it is that it has a requirement of scale. In the study, the median for a software security group (SSG) is 35 to 40 people, which is much too large for a majority of software companies out there. With the adoption of many agile software development paradigms, teams are getting smaller, not bigger, and are becoming isolated from main development teams. Especially if outsourced. And in actuality, it is my belief it's these smaller teams that would benefit most from a software security development lifecycle that is better studied, understood and adopted. It's one of the reasons I like the Microsoft SDL process. It works with teams as small as 5 or 10 people.
However, that is no reason to dismiss BSIMM. Of the 110 activities, although some simply don't fit, much does, regardless of the size of the team. The requirement is that it be bought into... shifting culture and defining attitude. What was interesting to see was the top 10 activities seen across most companies studied. They include:
- Create evangelism role/internal marketing
- Create policy
- Provide awareness training
- Create/use material specific to company history
- Build/publish security features (authentication, role management, key management, audit/log, crypto, protocols)
- Have SSG lead review efforts
- Use automated tools along with manual review
- Integrate black box security tools into the QA process (including protocol fuzzing)
- Use external pen testers to find problems
- Ensure host/network security basics in place
Sounds like good advice to me.
I'd like to congratulate Gary and his peers on an interesting study. And I hope others in the industry will look up this research and see how they can adopt it to their own development processes. With any luck, we can see adaptations to allow this to work with considerably smaller teams.
I am down on the Microsoft campus for the week hanging with other security professionals. As I was coming to the building to listen to Steve Riley, a few Security MVPs and I were talking about identity, and I was surprised to hear that they didn't realize you can use a managed Information Card issued by Microsoft Live ID to provide single sign-on to most of Microsoft's ecosystem. I use mine all the time, giving me single sign-on to MSDN, TechNet, Live, Connect etc.
Back in 2007 I actually blogged how to do this. But most people didn't realize that it has been rolled out to work with production services now, and has for some time (as a beta). So this blog is to provide a link on how to do this.
Rather simple.... just go here: https://login.live.com/beta/managecards.srf
Doing that will get you issued a managed card which you can use on XP SP3, Vista and Windows 7 workstations. When you sign in, you will now have an option to present an information card. It looks like this:
So if you ever find yourself complaining that you hate entering your Passport/LiveID password all the time when logging into Microsoft services, fear not. Use an Information Card and take advantage of single sign-on!...
This had me chuckling today...
OK, so everywhere I turn I am hearing people ridicule the changes in how UAC behaves in Windows 7. There is even proof of concept code that can turn off UAC without even being prompted.
For those with their heads in the sand, the story goes that in Windows 7 the default behaviour for UAC is to "Notify me only when programs try to make changes to my computer" and "Don't notify me when I make changes to Windows settings". Because UAC is a "Windows setting", it means you can disable UAC without being prompted. And people believe that due to this behaviour, UAC is broken.
Now, I have to say I am personally not a fan of the new slider tuning functionality of UAC in Windows 7. When Windows Vista came out I applauded Microsoft's approach as it forced people to see the trust boundaries that were being broken in software applications that didn't run properly when using least privilege. After all, that is what UAC is designed to do. It enables people to run with the least privilege they need and encourages applications to migrate to Standard User. Every time you see a prompt, you know which offending application could be written better. You turn to the vendor of that app and scream at them. And it's working. Crispin Cowan, who works on the team at Microsoft focused on UAC, had an interesting chart in his presentations at PDC last year showing the prompt reduction people were seeing in the field as applications were fixed while Vista was being adopted. It's rather significant.
This is a positive aspect of UAC.
So if it's working, why would Microsoft change it? Well, it's a balance between security and usability. The value a technical safeguard provides has to be weighed against the usability of the system. If it's too difficult for someone to adopt, people tend to find ways around it. This is exactly what happened with UAC in Vista. IT professionals (I use the term LOOSELY here) were recommending people turn it off. Customers were complaining about the experience, and Microsoft listened to the feedback. Basically, in Windows 7 you got what you asked for *sigh*
Which is rather disappointing. I think Microsoft is making the right commercial decision, but not the right security one. I am not objecting to the new slider. Only to the default shipping state. Of course, I can easily get the level I want by adjusting BACK to high. Which is exactly what I have done in my Windows 7 installs to date. Let me be clear. If you want the same behaviour you have in Vista, you can get it by setting the slider to the highest setting. This gives you the right elevations with the secure desktop as you have it now.
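If you prefer to script that change rather than drag the slider, the same settings live in the registry. Here is a minimal sketch, assuming Windows 7 and an elevated command prompt; these are the standard UAC policy values under the Policies\System key:

```shell
:: Restore Vista-style "always notify" UAC behaviour on Windows 7.
:: Run from an elevated command prompt.
:: EnableLUA=1 keeps UAC enabled, ConsentPromptBehaviorAdmin=2
:: prompts for consent on ANY change (not just program-initiated
:: ones), and PromptOnSecureDesktop=1 keeps the dimmed secure
:: desktop for the prompt itself.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 2 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v PromptOnSecureDesktop /t REG_DWORD /d 1 /f
```

Note that a change to EnableLUA only takes effect after a reboot, and in a domain these values are normally controlled through Group Policy rather than edited directly.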
I may personally object to Microsoft's decision because I don't find UAC a nuisance. I run as a Standard User. NOT as a protected admin in "administrator-approval" mode. I rarely see prompts, and my work desktop is Vista SP1, with a beta of SP2 on another. But as Susan is always so fond of saying, I am NOT a normal computer user.
So with an open mind, let's discuss an external view of Microsoft's decision. Vista got a bad rap on UAC. Customers complained. So Microsoft changed the behaviour. Does this make us less secure?
Well first off, let's remember that UAC is NOT a technical safeguard to provide security boundaries. Mark Russinovich has shown on several occasions how to get around UAC. The goal of UAC is to help get us to a point where everyone runs as standard user by default, and where all software is written with that assumption. Crispin's stats show it IS getting better as developers move towards this. However, UAC is NOT a mechanism to prevent applications from communicating with each other at different integrity levels on the same desktop. In other words, it's trivial to send window messages from a user's desktop to an elevated process in the same desktop. And because of this, it means if someone can get you to run code on your system, it isn't your system anymore. This is Law #1 in the 10 Immutable Laws of Security.
Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore
So if you have ever thought UAC prevented this, you are wrong. If you want that sort of isolation, the right way to do it is to use fast user switching and switch to a DIFFERENT desktop and log into an account with the appropriate privileges you need. This is what I do. If I NEED to do a bunch of admin things, I switch to an admin account and log in. If I need to browse places I am not confident in, I switch to a restricted account with almost no privileges. It's just safer that way. And for everything else I do on a daily basis, I use my Standard User account.
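If you aren't sure which kind of account a given desktop session is running under, you can check from a command prompt. A quick diagnostic sketch (Windows Vista/7; the "Mandatory Label" entry in the token's groups shows the integrity level):

```shell
:: Show the current account name and SID.
whoami /user
:: List group memberships; on Vista/Windows 7 the "Mandatory Label"
:: line reveals the integrity level of the session's token:
:: Medium for a Standard User session, High for an elevated/admin
:: session, Low for sandboxed processes like Protected Mode IE.
whoami /groups | findstr "Mandatory"
```

If that second command reports High Mandatory Level for your everyday desktop, you are doing your daily work elevated, which is exactly the habit the fast-user-switching approach above avoids.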
Now, let's reflect on the notion that the default settings make us less safe. Is that really true? Well in Vista, most people are told to turn OFF UAC. That's bad advice. But a reality we have seen. So in those situations, yes, UAC in Windows 7 is better. But what about those people that are used to UAC in Vista? Well, interestingly enough, what are we losing here? In Vista, most users' eyes gloss over when a UAC prompt shows up. Because few actually run as a Standard User, they confirm the prompt with a single click without even reading or understanding the message in front of them. So if we are making the choice for them on the most common prompts, is that a bad thing?
The fact this change exists in Windows 7 means Microsoft DID lower the bar for malware authors. It has gone from extremely difficult (but not impossible) to disable UAC, to trivial. However, the malware has to first be executed. In most cases users will have had to install that software in an elevated manner, giving the malware a chance to run with higher privileges already.
Now before you go off and start pointing out malware can run directly in the browser without the user's knowledge or need to install anything, remember how IE works. When surfing online it runs in its own sandbox in the LOW integrity level. Microsoft called it "Protected Mode IE" (PMIE) for a reason. It is significantly more difficult to get hostile code on the Internet to run without your knowledge and be able to do things like modify UAC settings. Microsoft is working hard in IE8 to make that even more difficult. So as an attack vector, that is unlikely unless the attacker first breaches the IE sandbox. If they do that, you have bigger things to worry about than a UAC prompt going missing. :-)
So am I for Microsoft's default behaviour? Not for me personally. But I understand it. They have taken a look at the security risk and balanced it against the usability of the system. Remember, security is all about risk mitigation, NOT risk avoidance. They have looked at the real world experience of UAC in Vista and tweaked it to give the best experience, while being cognisant of the implications to security. For users coming from an XP world (which is most people, since Vista adoption has been so slow), it means their experience will be better, and still more secure. And they probably won't follow the guidance to turn it off from the "IT professionals" who don't know any better.
So no, UAC is not broken in Windows 7. And it DOESN'T really make you less secure. But if you have concerns, stop using that damn administrator-approval mode account and move to a Standard User account. And increase the UAC settings to High. You can always use fast user switching and jump to a higher privileged desktop as needed. Of course, that shouldn't be needed often if you have software that actually works in a least privileged environment. And that is what is REALLY broken out there. Application vendors need to fix their stuff. Period. Which is what UAC was designed to help with!
Now here is a kewl video on YouTube about the history of the Internet.... and they didn't use Al Gore's name once!
Well, since everyone else is announcing it, I may as well follow the lemmings.
With many thanks to Microsoft, I have been awarded the distinction of Enterprise Security MVP with developer focus for a 4th year. Much appreciated. It is truly an honour. I am in a category with several friends I highly respect in the industry, like Jesper and Alun. God help us all... we should have some fun again this year at Summit.
Oh, and congrats to Dan for being awarded this year. It's nice to have a friend who LIKES smartcards and crypto join the Enterprise Security MVPs :-)
Looking forward to seeing everyone at MVP Summit!
OK, an interesting thread is starting up in a blog post from Susan on "Do the Math folks" where she talks about the costs of in-house vs. cloud-based services.
I have to say that in my own opinion, she is missing a CRITICAL costing factor. And that's TCO. She's an ubergeek... so she doesn't apply costs to managing all that infrastructure since she just "does" it. And probably faster than most. (I think it's all the clones she has) But here is the reality. Businesses can delegate responsibility for data management and protection to Cloud providers, reducing business risk and IT costs accordingly.
I don't usually talk about my business here, but let me explain what I mean. I own a small software company. I don't have a dedicated IT team to manage our infrastructure. I have to do it. I'm the ubergeek. Which is rather sad, since I have better things to be doing, like using my time in revenue generating pursuits. The reality is that every time I have to deal with a new patch, an update or an IT disaster I have to drop everything to manage it.
Now the smart readers will say to hire competent external staff. Maybe outsource it to an MSP. Well here is the nut. In my local area, I simply don't trust any of them. Few are competent enough to actually understand our risk profile and properly and securely manage the infrastructure. And the ones that can cost way more than they are worth. I see way too many going under right now, and I simply will not put my business at risk with MSPs that I can't trust.
Which is sad really... since I have many MSPs as customers. However, none of them are local to me... which means I can't get bodies in the office when I need it most. And the sensitivity of the information matters to me. So I won't just contract someone down in the States. Why? The Patriot Act. Sorry people, you are NOT going to access our sensitive systems from a terminal that may fall under those provisions. And to boot, we have a crackerjack BC Privacy Commissioner who mandates it that way anyways.
And I sure as hell won't delegate it to a firm whose people are overseas. You nuts? The weakest link in security is the human factor. I will NOT trust it to IT people who are paid $5/hr and are happy to jump between companies like they are playing hopscotch.
Now enter firms like Own Web Now, Amazon S3, Force.com and even Microsoft's own Online Services. Here are companies tying their business to the Cloud in one way or another. The idea of hosting critical services like Exchange, Sharepoint and CRM, and leaving the standard IT management to those companies, makes sense. Why? Because they have deeper pockets with more incentive to ensure my services stay running. Reputation matters to them. They will be around after this recession. These are all valuable pieces of a more dynamic IT infrastructure that can make sense. The TCO factors in as fewer IT resources are wasted dealing with the day-to-day mundane tasks. And they can be trusted.
However, I say that without finishing the statement. That should be that "they are trusted, and can be verified". In other words, as a business owner I can delegate IT management of critical services to them, but I had better not abdicate responsibility. I have to ensure we have backups of that information. That we can recover from it. There are no absolutes here. No one will care more about my data than I do. So I have to invest in ways to ensure it's protected.
Which gets me to a side benefit of Cloud Computing. I can delegate responsibility for all the day-to-day tasks of managing the systems to these firms. I do that now with services from Salesforce.com and Own Web Now. However, I ensure that data is routinely backed up to another provider. That's just being diligent.
Now one of the comments in Susan's thread was about the discomfort of these people being able to look at that data. Ya, there is a certain risk to that. Vlad could read all my mail. With the millions of emails that go through his network, he must have plenty of time to read mine.
Let's get real, shall we? As part of this exercise though, let's say that is a risk to me. Then it would be my responsibility to secure it. That's what email encryption is for. But what about databases? We use SQL 2008, and use transparent database encryption. That prevents 20-year-old IT guys at OWN who may be underpaid and have other interests in mind from detaching the database and moving it to their own workstations. We use DDL triggers and force access to certain data to be REPORTED to us immediately, allowing us to use human heuristics to ensure we know who is accessing the information.
And of course, we use strong authentication and identity assurance to make sure only authorized staff inside and outside our network are accessing our systems and the data on them. TRUST BUT VERIFY. I really need to get a tattoo of that or something.
75% of all statistics are faked. Just like that one. We can make numbers say whatever we really want. But at the end of the day, each business owner has to weigh costs against risk. This SHOULDN'T be about the technology or technical safeguards. It should be about the cost of acquisition and use of the data our businesses need. It's the data assets that matter. Not the systems that drive them.
Some will invest in that through Cloud based services. Others will demand it in house. But if you are going to have that debate, PLEASE include the full TCO discussion in the details. Otherwise you are simply comparing apples to oranges, and neither are good in a Christmas cake.
If there is one thing we can learn from the past, it is that we are doomed to repeat our failures if we ignore it.
The reciprocal is also true. If we reflect on our experiences properly, there is a lot we can learn from them.
In the world of designing secure code, this becomes more apparent as we watch Microsoft's SDL process mature. Next year will mark 10 years of threat modeling as a formalized methodology at Microsoft. In its infancy in 1999, only a few people at Microsoft were engaged in it. Now... every team is. And we can see the benefits of that when we reflect on the decline in critical bugs reported over the last few years.
Today I came across an interesting paper by Adam Shostack titled "Experiences Threat Modeling at Microsoft". Adam has ownership of the SDL tool I mentioned earlier this month, and it was interesting to see his approach in explaining how Microsoft is focusing on threat modeling, and how the design is for normal developers, NOT for security experts alone.
This is a critical point. Where in the past Microsoft indicated it was important to run brainstorming sessions with a security expert in tow, now ownership of the model comes from the developers themselves. By designing tools that allow the architects, designers and developers to all know how to look at threats to their systems, everyone benefits. It's more cost effective. And it raises the bar as everyone thinks more critically about the security impacts of the code.
This is rather refreshing. And a good, quick read. So check out the paper.