In the last few weeks, a prominent researcher, Dragos Ruiu (@dragosr), has stuck his neck out describing some interesting issues with a bunch of his computers. If his indicators of compromise are to be believed (and there is the first problem), we have a significant issue. The problem is that the chorus of "It's not real", "It's impossible", "It's fake" is becoming overwhelming without sufficient evidence one way or the other. Why are so many folks in our community ready to jump on the negative bandwagon, even if they can't prove it or simply don't have enough evidence to say one way or the other?
My issue is not “is it true” or “I think it’s true” or “I think it’s false”, it’s that so many info sec “professionals” are basically claiming:
- Because I personally can’t verify this issue is true, the issue must be false. QED.
This fails both Logic 101, class 1, and the scientific method.
This is not a technical issue, it’s a people issue.
We must support all of our researchers, particularly the ones who turn out to be wrong. This should be entirely obvious. If we eat our young and most venerable in front of the world's media, we will be a laughing stock. Certain "researchers" are used by their journalist "friends" to say very publicly "I think X is a fool for thinking that his computers are suspect". This is utterly wrong and foolhardy, and it works for the clickbait articles the J's write and for their news cycle, not for us.
Not everybody is a viable candidate for having the sample. In my view, the only folks who should have a sample of this thing are those who have sufficient operational security and budget to brick and then utterly destroy at least two or twenty computers in a safe environment. That doesn't describe many labs. And even then, you should have a good reason for having it. I consider the sample described as needing the electronic equivalent of a PC4 bio lab. Most labs are not PC4, and I bet most infosec computing labs are not anywhere near capable of hosting this sample.
Not one of us has all of the skills required to look at this thing. The only way this can be made to work is by working together, pulling together E.Eng folks with the sort of expensive equipment only a well funded organisation or a university lab might muster, microcontroller freaks, firmware folks, CPU microcode folks, USB folks, file system folks, assembly language folks, audio folks, forensic folks, malware folks, folks who are good at certain types of Windows font malware, and so on. There is not a single human being alive who can do it all. It’s no surprise to me that Dragos has struggled to get a reproducible but sterile sample out. I bet most of us would have failed, too.
We must respect and use the scientific method. The scientific method is well tested and true. We must rule out confirmation bias; we must rule out just "well, a $0.10 audio chip will do that, as most of them are paired with $0.05 speakers and most of the time it doesn't matter". I actually don't care if this thing is real or not. If it's real, there will be patches. If it's not real, it doesn't matter. I do care about the scientific method, and its lack of application in our research community. We aren't researchers for the most part, and I find it frustrating that most of us don't seem to understand the very basic steps of careful lab work and repeating important experiments.
We must allow sufficient time to allow the researchers to collaborate and either have a positive or negative result, analyse their findings and report back to us. Again, I come back to our journalist “friends”, who can’t live without conflict. The 24 hour news cycle is their problem, not our problem. We have Twitter or Google Plus or conferences. Have some respect and wait a little before running to the nearest J “friends” and bleating “It’s an obvious fake”.
We owe a debt to folks like Dragos who have odd results, and who are brave enough to report them publicly. Odd results are what push us forward as an industry. Cryptanalysis wouldn't exist without them. If we make it hard or impossible for respected folks like Dragos to report odd results, imagine what will happen next time. What happens if it's someone without much of a reputation? We need a framework to collaborate, not to tear each other down.
Our industry's story is not the story of the little boy who cried wolf. We are (or should be) more mature than a children's fable. Have some respect for our profession, and work with researchers rather than sullying their names (and yours and mine) by announcing, before you have proof, that something's not quite right. If anything, we must celebrate negative results every bit as much as positive results, because I don't know about you, but I work a lot harder when I know an app is hardened. I try every trick in the book, including the stuff that is bleeding edge, as a virtual masterclass in our field. I bet Dragos has given this the sort of inspection that only the most ardent forensic researcher might have done. If he hasn't gotten that far, it's either sufficiently advanced to be indistinguishable from magic, or he needs help to let us understand what is actually there. I bet that few of us could have gotten as far as Dragos has.
To me, we must step back and work together as an industry – ask Dragos: "What do you need?" "How can we help?", and if the answer is "Give me time", then let's step back and give him time. If it's a USB circuit analyser or a microcontroller dev system plus some mad soldering skills, well, help him, don't tear him down. Dragos has shown he has sufficient operational security to research this for another 12-24 months. We don't need to know now, now, or now. We gain nothing by trashing his name.
Just stop. Stop trashing our industry, and let’s work together.
So I’m getting a lot of Twitter spam with links to install bad crap on my computer.
More than just occasionally, these DMs are sent by folks in the infosec field. They should know better than to click unknown links without taking precautions.
So what do you need to do?
Simple. Follow these basic NIST approved rules:
Contain – find out how many of your computers are infected. If you don’t know how to do this, assume they’re all suspect, and ask your family’s tech support. I know you all know the geek in the family, as it’s often me.
Eradicate – Clean up the mess. Sometimes, you can just use anti-virus to clean it up, other times, you need to take drastic action, such as a complete re-install. As I run a Mac household with a single Windows box (the wife’s), I’m moderately safe as I have very good operational security skills. If you’re running Windows, it’s time for Windows 8, or if you don’t like Windows 8, Windows 7 with IE 10.
Recover – If you need to re-install, you had backups, right? Restore them. Get everything back the way you like it.
- Use the latest operating system. Windows XP has six months left on the clock. Upgrade to Windows 7 or 8. MacOS X 10.8 is a good upgrade if you’re still stuck on an older version. There is no reason not to upgrade. On Linux or your favorite alternative OS, there is zero reason not to use the latest LTS or latest released version. I make sure I live within my home directory, and have a list of packages I like to install on every new Linux install, so I’m productive in Linux about 20-30 minutes after installation.
- Patch all your systems with all of the latest patches. If you’re not good with this, enable automatic updates so it just happens for you automatically. You may need to reboot occasionally, so do so if your computer is asking you to do that. On Windows 8, it only takes 20 or so seconds. On MacOS X, it even remembers which apps and documents were open.
- Use a safer browser. Use IE 10. Use the latest Firefox. Use the latest Chrome. Don’t use older browsers or you will get owned.
- On a trusted device, preferably one that has been completely re-installed, it’s time to change ALL of your passwords as they are ALL compromised unless proven otherwise. I use a password manager. I like KeePass X, 1Password, and a few others. None of my accounts shares a password with any other account, and they’re all ridiculously strong.
- Protect your password manager. Make sure you have practiced backing up and restoring your password file. I’ve got it sprinkled around in a few trusted places so that I can recover my life if something bad was to happen to any single or even a few devices.
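Practicing the backup is the part most folks skip. A minimal sketch of a backup-and-verify step (the vault filename and backup volume here are examples, not my real setup):

```shell
# Sketch: copy the password database to a second location and verify the
# copy byte-for-byte before trusting it. Adjust paths to your own vault.
cp ~/passwords.kdbx /Volumes/Backup/passwords.kdbx
cmp ~/passwords.kdbx /Volumes/Backup/passwords.kdbx && echo "backup verified"
```

Then actually open the copy in your password manager once, so you know restoring works before you need it to.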
- Backups. I know, right? It’s always fun until all your data and life is gone. Backup, backup, backup! There are great tools out there – Time Capsule for Mac, Rebit for Windows, rsync for Unix types.
Learn and improve. It's important to make sure that your Twitter feed remains your Twitter feed, and the same goes for all of your other accounts.
- Twitter has two factor authentication. Enable it and use it.
- Google has many forms of two factor authentication. Enable it and use it.
- Facebook has two factor authentication, login approvals. Enable it and use it.
- Apple has two factor authentication for iTunes / iCloud / iEtc. Enable it and use it.
- In fact, nearly everyone does, including your bank. Enable two factor authentication and use it.
I never use real data for security questions and answers, such as my mother's maiden name (that's a public record) or my birth date (which, like everyone else, I celebrate once per year, so you could work it out if you met me at the right time of year). These are shared knowledge questions, and an attacker can use them to bypass Twitter's, Google's and Facebook's security settings. I either make the answer up or just insert a random value. For something low security, like a newspaper login, I don't track these random values, as my password manager keeps track of the actual password. For high value sites, I will record the random value for "What's your favorite sports team". It's always fun reading out 25 characters of gibberish to a call centre in a developing country.
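One cheap way to mint such a random value, sketched with openssl (any decent random generator will do; the 25 character length just matches my call centre anecdote):

```shell
# Sketch: mint 25 characters of gibberish for a "security question" answer.
# Store it in the password manager, never in your head.
openssl rand -base64 32 | tr -d '=+/\n' | cut -c1-25
```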
I might make a detailed assessment of the DM spam I’m getting, but honestly, it’s so amateur hour I can’t really be bothered. There is no “advanced” persistent threat here – these guys are really “why try harder?” when folks don’t undertake even the most basic of self protection.
Lastly – "don't click shit". If you don't know the person or the URL seems hinky, don't click it.
That goes double for infosec pros. You know better, or you will just after you click the link in Incognito / private mode. Instead, why not fire up that vulnerable but isolated XP throw away VM with a MITM proxy and do it properly if you really insist on getting pwned. If you don’t have time for that, don’t click shit.
This post is a last resort as I’ve had two comments rejected by the moderators at The Register, one of my favorite IT news websites.
Lewis Page is a regular contributor to the Register. For whatever reason, around 50% of his total output there is (willful mis-)reporting on various papers and research on climate science. Considering he (and for what it's worth, myself) is not a climatologist, it's very frustrating to see the "science" category tag on these articles. It wouldn't be so bad if they were marked Opinion or Editorial, and if he weren't deliberately misrepresenting the observed facts, papers, research and the scientists' own words, but he gives no quarter at all to anything that doesn't fit his worldview.
Just to be utterly clear – among scientists who are trained in climatology, there is no doubt that we are in a rapidly changing world. The question hasn't been "if" there's climate change for about 15-20 years, but "what does it mean to be on this planet in 10-20-50-100 years". It's up to us and the politicians to decide what to do about it. Even if climate change turns out not to be as bad as predicted (in fact, it's tracking worse than predicted), the actions we must take now are good for us and the planet:
- less air pollution == longer, healthier lives
- less water pollution == longer, healthier lives
- lower energy bills == more money for other things
- less wasteful consumption of a finite non-renewable resource == richer, more economically healthy future and longer production of things we can’t economically make without oil, like certain materials and medicines and so on
There is literally no downside to acting to curb emissions, but there's a lot on the line if we don't. Personally, I don't think an ETS is the correct path: it's a cheap way for the government to earn money and be seen to be doing something, anything at all, and it's a derivative market, a structure with a colorful history of abuse (such as Germany, where too many credits were issued, undermining the market, and California, where traders essentially created artificial price spikes to maximise profits and caused artificial blackouts). Despite all this, we must move on to the next phase of our industrial planet.
I call on the Register to provide the scientific consensus view. Here’s my rejected comment in full.
It’s my long and fervent wish that the Register would stop publishing these opinion pieces, as I rather enjoy the “call a spade a f$&#ing spade” approach to almost all the other articles, reviews and IT news, which is rather let down by Mr Page’s long standing and regular missives on this topic.
In my opinion, these articles are not “science”, nor are they reasonable journalism, where the authors of the paper might be asked for a comment or an interview to get their side first hand. Mr Page can still have his opinion, but at least pay us the respect of writing about the researchers, paper or presentation in an unbiased way to allow us to compare Mr Page’s opinion with what they really wrote, demonstrated, observed or said.
At least pay us the respect of providing balanced coverage either by providing mainstream climate science coverage in the science category along with Mr Page’s opinion pieces and coverage, or by adding in right of reply, interviews and accurate coverage of what was actually written in the papers and research.
I’ve been mulling this one over for a while. And honestly, after a post to an internal global mail list at work putting forward my ideas, I’ve come to realise there are at least two camps in information security:
- Those who aim via various usual suspects to protect things
- Those who aim via various often controversial and novel means to protect people
Think about this for one second. If your compliance program is entirely around protecting critical data assets, you’re protecting things. If your infosec program is about reducing fraud, building resilience, or reducing harmful events, you’re protecting people, often from themselves.
I didn’t think my rather longish post, which brought together the ideas of the information swarm (it’s there, deal with it), information security asymmetry and pets/cattle (I rather like this one), would land with the heavy thud akin to 95 bullet points nailed to the church door.
So I started thinking – why do people still promulgate stupid policies that have no bearing on evidence? Why do people still believe that policies, standards, and spending squillions on edge and endpoint protection will protect them, when it is trivial to break it all?
Faith in our dads and granddads that their received wisdom is appropriate for today's conditions.
“Si Dieu n’existait pas, il faudrait l’inventer” Voltaire
(Often mis-translated as “if religion did not exist, it would be necessary to create it”, but close enough for my purposes)
I think we’re seeing the beginning of infosec religion, where it is not acceptable to speak up against unthinking enforcement of hand me down policies like 30 day password resets or absurd password complexity, where it is impossible to ask for reasonable alternatives when you attempt to rule out the imbecilic alternatives like basic authentication headers.
We cannot expect everyone using IT to do it right, or have high levels of operational security. Folks often have a quizzical laugh at my rather large random password collection and use of virtual machines to isolate Java and an icky SOE. But you know what? When LinkedIn got pwned, I had zero fears that my use of LinkedIn would compromise anything else. I had used a longish random password unique to LinkedIn. So I could take my time to reset that password, safe in the knowledge that even with the best GPU crackers in existence, the heat death of the universe would come before my password hash was cracked. Plenty of time. Fantastic … for me, and I finally get a pay off for being so paranoid.
But… I don’t check my main OS every day for malware I didn’t create. I don’t check the insides of my various devices for evil maid MITM or keyloggers. Let’s be honest – no one but the ultra paranoid do this, and they don’t get anything done. But infosec purists expect everyone to have a bleached white pristine machine to do things – or else the user is at fault for not maintaining their systems.
We have to stop protecting things and start protecting humans, by creating human friendly, resilient processes with appropriate checks and balances that do not break as soon as a key logger or network sniffer or more to the point, some skill is brought to bear. Security must be agreeable to humans, transparent (as in plain sight as well as easy to follow), equitable, and the user has to be in charge of their identity and linked personas, and ultimately their preferred level of privacy.
I am nailing my colors to the mast – we need to make information technology work for humans. It is our creature, to do with as we want. This human says "no".
A colleague of mine just received one of those awful marketing calls where the vendor rings *you* and demands your personal information “for privacy reasons” before continuing with the phone call.
As a consumer, you must hang up to avoid being scammed. End of story. No exceptions.
Even if the business has a relationship with the consumer, asking them to prove who they are is wildly inappropriate. Under no circumstances should a customer be required to provide personal information to an unknown caller. It must be the other way around – the firm must provide positive proof of who they are! And by calling the client, the firm already knows who the client is, so there’s no reason for the client to prove who they are.
As a business, you are directly hurting your bottom line and snatching defeat from the jaws of victory by asking your customers to prove their identity to you.
This is about the dumbest marketing mistake ever – many customers will automatically assume (correctly in my view) that the campaign is a scam, and repeatedly hang up, thus lowering goal completion rates and driving up the cost of sales. Thus this dumb move can cost a company millions in opportunity costs in the form of:
- wasted marketing (hundreds of dropped customer contacts for every "successful" completed sale);
- increased fraud against the consumer, and ultimately the business, when customers reject fraudulent transactions;
- the loss of thousands if not hundreds of thousands of customers, and their ongoing and future revenue, if they lose trust in the firm, or if the firm's lack of fraud prevention causes them to suffer fraud by allowing scammers to easily harvest PII from the customer base and misuse it.
Customers hate moving businesses once they have settled on a supplier of choice, but if you keep on hassling them the wrong way, they do up and leave.
So if any of you are in marketing, or are facing pressure from the business to start your call script by asking customers for personally identifying information, know that you are training your customers to become victims of phishing attacks. That will cost you millions of dollars and lose you many more customers than you'll ever gain.
It’s more than just time to change this very, very, very bad habit.
Everything now works.
The quick version is:
- Create a new Fedora 18 VM
- Do not use “Easy install”
- Disable 3D acceleration in the VM settings (Command-E) prior to starting the install, otherwise you get a spinning idle cursor and no action upon first boot
- Install as you see fit. I use a 64 bit F18, 4 GB of RAM (my Mac has 16 GB of RAM), 100 GB partition, and 4 virtual processors. YMMV. General rule of thumb is that 1 GB is plenty for a casual Gnome desktop, but if you’re using it for software development like me, then the more the better.
- Once installed, update all software using
sudo yum update
- and then reboot.
- This gets you to kernel 3.8.4-202.fc18.x86_64 at the time of writing. This is currently incompatible with VMware tools 9.2.2-893683, but there are known patches (see below) until a VMWare Tools update comes along.
- Let’s install the build tool chain needed to complete VMware Tool installation
sudo yum install patch kernel-devel kernel-headers gcc make
- Reboot just to make sure we’re all good
- Click “Install VMware Tools”
- Unzip the tar ball to ~/Downloads/vmware-tools-distrib/
- Visit this page http://erikbryantology.blogspot.com.au/2013/03/patching-vmware-tools-in-fedora-18.html
- Download the vmware9.k3.8rc4.patch and vmware9.compat_mm.patch patches, and if you feel like you’re not a patch wizard, the shell scripts that apply those patches, too.
- Follow the directions in the above page to patch the source to the VMCI and HGFS file system sharing modules
- Now build the patched VMware tools
- There should be no compilation ERRORS (there will be a couple of warnings, particularly for 64 bit architectures)
- Test drag and drop file sharing, cut n paste, screen resizing, shared folders, and printing to a MacOS X printer.
- If it’s all good, shutdown the VM
- Go to Settings for the VM (Command-E)
- In Display, re-enable 3D acceleration
- Start up the VM
At this point, you should have access to accelerated 3D within the VM and all VMWare Toolbox features.
Responsible disclosure is a double edged sword. The Faustian bargain is that I keep my mouth shut to give you time to fix the flaws, not for you to ignore me. I would humbly suggest that it is very relevant to your interests when a top security researcher submits a business logic flaw to you that is trivially exploitable with just iTunes or a browser, requiring no actual hacking skills.
If anyone knows anyone at Apple, please re-share or forward this post, and ask them to review my rather detailed description of my rather simple method of exploiting the Apple ID password reset system I submitted over six months ago with so far zero response beyond an automated reply. The report tracking number is #221529179 submitted August 12, 2012.
My issue should be fixed along with the other issues before they bring password reset back online; it should not return with my flaw intact.
I have a bit of a code review job at the moment. It’s a large code base, and you all know what that means. LOTS OF RAM! So I got me a 16 GB upgrade. Then I found that I could only allocate 8 GB to a VM in VMWare Fusion. So here’s how to scan a big chunk of code with minimal pain:
The default VM disk size for an Easy Installed Ubuntu is 20 GB, with 8 GB of swap. WTF. So don't use Easy Install, as you'll run out of disk space doing a scan of a moderately sized application. I expanded mine to 80 GB after it was all installed, but if you are smart, unlike me, do it when you first build the system.
To add more than 8 GB to a VM in VMWare Fusion, allocate 8192 MB (the maximum amount) in the GUI whilst the VM is shutdown, open the package contents of the VM by right clicking the VM (I'm on a Mac, so if you rename a folder foobar.vmwarevm, it becomes a package automagically). Find the VMX file. Open it carefully in a decent editor (vi or TextWrangler or TextMate) – there is magic here and if you edit it wrong, your VM will not boot. Change memsize = "8192" to say memsize = "12384" and save it out. I wouldn't go too close to your total memory size as you'll start paging on the Mac, and that's just pain. Boot the VM. Confirm you have enough memory!
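If you'd rather not open the VMX in an editor at all, the same edit can be sketched with sed ("MyVM" is a placeholder for your VM's name; sed keeps a .orig backup in case a bad edit stops the VM booting):

```shell
# Sketch: bump memsize in the .vmx in place, keeping a backup copy.
VMX="MyVM.vmwarevm/MyVM.vmx"
sed -i.orig 's/^memsize = "8192"/memsize = "12384"/' "$VMX"
grep '^memsize' "$VMX"    # confirm the new value took
```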
First off, do not even try to do it within Audit Workbench. It will just fail.
Secondly, it seems that HP do not test the latest version of SCA on OpenSuse 12.2, which is a shame as I really liked OpenSuse. There’s no way to fix up the dependencies without using an unsafe (older) version of Java, so I gave it up.
Despite not being listed as a qualified platform (CentOS, Red Hat, and OpenSuse all are), Ubuntu had a graphical installer compared to OpenSuse's text only install. Alrighty, then.
Install the latest Oracle Java 1.7, using the 64 bit JDK for Linux. I installed it to /usr/local/java/. Weep, for you now have a massive security hole installed.
Force Ubuntu to use that JVM using update-alternatives:

sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk1.7.0_15/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.7.0_15/bin/javac" 1
sudo update-alternatives --set java /usr/local/java/jdk1.7.0_15/bin/java
sudo update-alternatives --set javac /usr/local/java/jdk1.7.0_15/bin/javac
I created the following in /etc/profile.d/java.sh
#!/bin/sh
JAVA_HOME=/usr/local/java/jdk1.7.0_15
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH
Note that I did not tell Ubuntu about Java Web Start. If you want to keep your Ubuntu box yours, you will not let JWS anywhere near a browser. If you did this step, it’s best to delete javaws completely from your system to avoid any potential for drive by download trojans.
Install SCA as per HP’s instructions.
Now, you need to go hacking as HP for some reason still insist that 32 bit JVMs are somehow adequate. Not surprisingly, Audit Workbench pops up an exception as soon as you start it if you take no further action to make it work. So let’s fix that up.
I went and hacked JAVA_CMD in /opt/HP_Fortify/HP_Fortify_SCA_and_Apps_3.80/Core/private-bin/awb/productlaunch to be the following instead of the JRE provided by HP:
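The exact line didn't survive into this post, but given the JDK path from the update-alternatives step above, it was presumably something along these lines (the path is an assumption based on where I installed the JDK):

```shell
# Hypothetical reconstruction: point the launcher at the 64 bit Oracle JDK
# installed earlier, instead of the 32 bit JRE that HP ships.
JAVA_CMD=/usr/local/java/jdk1.7.0_15/bin/java
```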
After that, Audit Workbench will run.
Now, let's work on ScanWizard. ScanWizard is really the only way to produce repeatable scans that don't run out of memory. So run ScanWizard. It'll create a shell file for you to edit. You need to make the following changes:
MEMORY="-Xmx6000M -Xms1200M -Xss96M "
LAUNCHERSWITCHES="-64 "
There’s a space after -64. Without that it fails.
Then there are bugs in the generated scan script that mean it will never work as a 64 bit scan. It's almost as if HP never tested 64 bit scans on large code bases (ones needing > 4 GB to complete a scan). I struggle to believe that, especially as their on demand service is almost certainly using something very akin to this setup.
Change this bit of the scan shell script:
FILENUMBER=`$SOURCEANALYZER -b $BUILDID -show-files | wc -l`
if [ ! -f $OLDFILENUMBER ]; then
    echo It appears to be the first time running this script, setting $OLDFILENUMBER to $FILENUMBER
    echo $FILENUMBER > $OLDFILENUMBER
else
    OLDFILENO=`cat $OLDFILENUMBER`
    DIFF=`expr $OLDFILENO "*" $FILENOMAXDIFF`
    DIFF=`expr $DIFF / 100`
    MAX=`expr $OLDFILENO + $DIFF`
    MIN=`expr $OLDFILENO - $DIFF`
    if [ $FILENUMBER -lt $MIN ] ; then SHOWWARNING=true; fi
    if [ $FILENUMBER -gt $MAX ] ; then SHOWWARNING=true; fi
    if [ $SHOWWARNING == true ] ; then

to this:

FILENUMBER=`$SOURCEANALYZER $MEMORY $LAUNCHERSWITCHES -b $BUILDID -show-files | wc -l`
if [ ! -f $OLDFILENUMBER ]; then
    echo It appears to be the first time running this script, setting $OLDFILENUMBER to $FILENUMBER
    echo $FILENUMBER > $OLDFILENUMBER
else
    OLDFILENO=`cat $OLDFILENUMBER`
    DIFF=`expr $OLDFILENO "*" $FILENOMAXDIFF`
    DIFF=`expr $DIFF / 100`
    MAX=`expr $OLDFILENO + $DIFF`
    MIN=`expr $OLDFILENO - $DIFF`
    SHOWWARNING=false
    if [ $FILENUMBER -lt $MIN ] ; then SHOWWARNING=true; fi
    if [ $FILENUMBER -gt $MAX ] ; then SHOWWARNING=true; fi
    if [ $SHOWWARNING = true ] ; then
Yes, there's an uninitialized variable AND a portability bug in a few lines of code. Quality. Two equals signs (==) are a bashism, not valid POSIX sh/dash syntax, so obviously that was well tested before release! Change it to = (or -eq for numeric comparisons) and you should be golden.
After that, just keep an eye out for out of memory errors and any time you notice it saying "Java command not found". Opening a large FPR file may require bumping up Audit Workbench's memory; I had to with a 141 MB FPR file. YMMV.
I was heartened to find out that someone was given grant money for a study that demonstrates that the fresh brains market in a zombie apocalypse would peter out after six months. Afterwards, the earth would be either empty (most likely) or a wasteland with few zombies.
So that gave me an idea. Gresham’s Law, crudely stated, says that bad money drives out good money. My thesis is that the market for high quality security assessments (=”good money” e.g. skilled manual review) is being driven out by the prevalence of low / unknown quality security assessments (=”bad money”) in a rush to the bottom in terms of fees. This correlates with an increase in business loss as attackers stop putting up alert boxes and start stealing (brains) from the population.
So is there any hope? Do we need hope? Could we have a market in the post-trust Internet?
Let's have a thought experiment – what would the Internet look like post zombie apocalypse (or, if you're Paul Fenwick, post singularity AI overlord who turns out not to be our friend)? Could commerce exist, and in what form, if we totally (and I mean totally) debased the security market to the point that there is no trust on the Internet?
What would that look like for traders in an all lolcats world?
In my view, the signs of a post-zombie apocalypse are:
- The market would mainly consist of small unregulated trades, much like the drug deals you see on TV crime shows today;
- There will be a limited market for large trades, and large trades would be highly regulated in a walled garden;
- There is very limited to no trust;
- Trades would be done in places that are not particularly consumer friendly (either "friendly" to mall owners like Amazon or Etsy, or dark places like the Silk Road);
- There would likely be an arms race of sorts between the main actors in the market, such as targeted phishes of oppressed ethnic minorities or other outgroups;
- There would be little to no enforcement as there’s basically no detection;
- There would be minimal to no proactive security measures being undertaken, with this "technology" essentially unknown to the market or deeply hoarded by those who actually know it.
In my view, many of these signs are starting to crop up now, with the dark net market of malware, infected machines, and illicit substances traded for virtual currencies.
We are at a turning point for trust. Either we must support the market in a way that punishes weak security or bad money, and rewards leading security practices, or we give up and embrace the smaller and more diverse dark market. There’s still money to be made – for some – in the dark market.
What do you think the future of the security market looks like?
I have taken the step of finally splitting the cut-n-paste import from my blog at Advogato into the days they actually occurred. All that content was here previously, but in some cases bunched together over many thousands of lines in single massive multi-month postings.
Some early permalinks are gone, but that’s okay, you can search for the content. The content I’m talking about dates back more than ten years.
This post is not in Latin, but essentially a call to the Information Security industry to end policies based upon argumentum ad antiquitatem, which includes:
- Password change, complexity and length policies and standards that simply don’t make sense in the light of research and tools that show that we can crack ALL passwords in a reasonable time. It’s time to move on to two factor authentication, alternatives such as OAuth2 (i.e. Facebook/Twitter/G+ integration) or Mozilla Account Manager, and random long passphrases for all accounts.
- "Security" shared knowledge questions and answers. These are commonly used to "prove" that you have sufficient evidence of identity to resume access to an account. We see these actively exploited continuously now. Unfortunately, most families, including ex-spouses, have sufficient knowledge of the identity and access to the person's identity documents that such questions, no matter how phrased (like "What was your favorite childhood memory"), are simply unsafe at any speed, as more than ONE person knows or can guess the correct answer.
- That requiring authentication is enough to eliminate risks in your application. Identity and access management is important, but it’s only part of the picture.
- That enforcing SSL or access through a firewall is enough to eliminate risks in your application. Confidentiality and integrity of connection is vital, especially if you’re not doing it today, but it’s only part of the picture.
- That obfuscation is enough to deter hackers. Client side code is so beguiling and the UX is often amazing, but it’s not safe. Business decisions must be enforced at a trusted location, and there’s little business reason to do this twice. So let’s get that balance right.
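The random long passphrase point above is easy to demo. A minimal sketch, assuming a GNU userland (`shuf`, `paste`) and using a tiny inline word list purely for illustration – in practice, substitute a proper wordlist such as a diceware list:

```shell
# Sketch: pick 4 random words and join them into a passphrase.
# The word list here is illustrative only - use a real wordlist.
words="correct horse battery staple orbit velvet marble copper glacier ponder"
passphrase=$(printf '%s\n' $words | shuf -n 4 | paste -sd- -)
echo "$passphrase"
```

With a real wordlist of a few thousand words, four to six words gives a passphrase that is both memorable and expensive to crack; for real use, point `shuf` at better randomness with `--random-source=/dev/urandom`.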
What are some of your pet “argumentum ad antiquitatem” fallacies?
So in a fit of security through obscurity, I renamed my WordPress database tables and promptly broke WordPress with a highly informative “You do not have sufficient permissions to access this page.” error message when accessing wp-admin.
Changing the prefix is most easily done with a new installation, but my installation dates from the very first versions of WordPress, when the dinosaurs roamed. Due to WordPress’s design, changing the database prefix (‘wp_’) is not as straightforward as you would expect.
In this exercise, we’re going to change from the default “wp_” prefix to “foo_”. If you’re doing this for security through obscurity reasons, don’t use “foo_”, use something you made up. Trust me, my prefix is NOT “foo_”. In wp-config.php, change:
$table_prefix = 'wp_';
to:
$table_prefix = 'foo_';
Once you’ve saved the file, your WordPress installation is now officially broken. Move fast!
Rename your tables
use myblog;
show tables;
and for each of the tables you see there, do this:
rename table wp_options to foo_options;
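If you have a lot of tables, you can generate all of the RENAME statements rather than typing each one. A minimal sketch – the here-doc below stands in for your actual SHOW TABLES output, and you should eyeball the generated SQL before pasting it into mysql:

```shell
# Sketch: turn SHOW TABLES output into RENAME TABLE statements.
# The here-doc simulates that output for illustration.
gen_renames() {
  while read -r t; do
    echo "RENAME TABLE $t TO foo_${t#wp_};"
  done
}
out=$(gen_renames <<'EOF'
wp_options
wp_posts
wp_usermeta
EOF
)
echo "$out"
```

In a real run you would feed the function something like `mysql -N -e "SHOW TABLES LIKE 'wp\_%'" myblog` instead of the here-doc.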
At this point, your blog will now be viewable again, but you will not be able to administer it. Accessing /wp-admin/ will say “You do not have sufficient permissions to access this page.”
Fix WordPress Brain Damage
Let’s go ahead and fix that for you:
UPDATE foo_usermeta SET meta_key = REPLACE(meta_key,'wp_','foo_');
UPDATE foo_options SET option_name = REPLACE(option_name,'wp_','foo_');
I always live in hope that just one day, the folks over at Fedora will actually have a pain-free VMWare installation. Not to be. Here’s how to do it with minimal gnashing of teeth.
Bugs that get you before anything else
On VMWare Fusion 5, currently Fedora 18 x86_64 Live DVD’s graphical installer will boot and then get stuck at a blue GUI screen if you have 3D acceleration turned on (which is the default if you choose Linux / Fedora 64 bit).
- Virtual Machine -> Settings -> Display -> disable 3D acceleration.
We’ll come back to this after the installation of VMWare Tools
Installing Fedora 18 in VMWare Fusion / VMWare Workstation 8
The installation is pretty straight forward … as long as you can see it.
The only non-default choice I’d like you to change is to set your standard user up to be in the administrators group (it’s a checkbox during installation). Being in the administrators group allows sudo to run. If you don’t want to do this, drop sudo from the beginning of all of the commands below, and use “su -” to get a root shell instead.
The new graphical installer still has a few bugs:
- Non-fatal – On the text error message screen (Control-Alt-F2) there’s an error message from grub2 (still!): file not found /boot/grub2/locale/en.mo.gz. This will not prevent installation, so just ignore it for now (as the Fedora folks have for a couple of releases!). Go back to the live desktop screen using Control-Alt-F1
- PITA – Try not to move the installer window offscreen, as it’s difficult to finish the installation if it’s even a little off screen. If you get stuck, press Tab until you hit the “Next” button – or just reboot and start again
Once you have Fedora installed, login and open a terminal window (Activities -> type in “Terminal”)
sudo yum update
sudo reboot
sudo yum install kernel-devel kernel-headers gcc make
sudo reboot
Fix missing kernel headers
At least for now, VMware Tools 9.2.2 build-893683 will moan about a path not found error for the kernel headers. Let’s go ahead and fix that for you:
sudo cp /usr/include/linux/version.h /lib/modules/`uname -r`/build/include/linux/
NB: The backtick (`) executes the command “uname -r” to make the above work no matter what your kernel version is.
NB: Some highly ranked and well meaning instructions want you to install the x86_64 or PAE versions of kernel devel or kernel headers when trying to locate the correct header files. This is not necessary for the x86_64 kernel on Fedora 18, which I am assuming you’re using as nearly everything released by AMD or Intel for the last six years is 64 bit capable. Those instructions might be relevant to your interests if you are using the 32 bit i686 version or PAE version of Fedora 18.
Mount VMWare Tools
Make sure you have the latest updates installed in VMWare before proceeding!
- Virtual Machine -> Install VMWare Tools
Fedora 18 mounts removable media in a per-user specific location (/run/media/<username>/<volume name>), so you need to know your username and the volume name
Build VMWare Tools
Click on Activities, and type Terminal
tar zxf /run/media/`whoami`/VMware\ Tools/VMw*.tar.gz
cd vmware-tools-distrib
sudo ./vmware-install.pl
Make sure everything compiled okay, and if so, restart:
sudo reboot
NB: The backtick (`) executes the command “whoami” to make the above work no matter what your username is.
No 3D Acceleration oh noes!1!! Install Cinnamon or Mate
Now, all the normal VMWare Tools features will work. Unfortunately, after all the faffing about, I didn’t manage to get 3D acceleration working. I ended up installing something a bit lighter than Gnome 3.6, which requires hardware 3D acceleration.
- Activities -> Software -> Packages -> Cinnamon for a more modern desktop appearance or
- Activities -> Software -> Packages -> MATE for old school Gnome 2 desktop appearance
- From the session pull down, change across to Cinnamon or Mate and log back in
This might be telling folks to suck eggs, but if you are doing secure code reviews and your development skills relate to type 1 JSP and Struts 1.3, it’s really time you got stuck into volunteering to code for open source projects that use modern technologies. There’s heaps of code projects at OWASP that need help, including helping me with code snippets that are in a modern paradigm.
I don’t care what technologies you choose, but your code reviews will not be using Type 1 JSPs or Struts for that much longer – if at all. Time to upskill!
- Ajax anything. Particularly jQuery and node.js. GWT is on the wane, but still useful to know
- Spring Security, Spring Framework and particularly Spring Web Flow are essential skills for any code reviewer doing commercial enterprise code reviews
- .NET 4.5 and Azure are killer skills at the moment, particularly as Windows 2012 has just been released. Honestly, there is a good market to be a specialist just in this language and framework set, as it’s literally too large for any one person to know.
- Essential co-skills: Continuous integration, agile methodologies (you have updated your services to be agile aligned, right?), and writing security unit tests so your customers can repro the issues you find.
It’s important to realise that good code reviewers can code, if poorly. Poor code reviewers don’t code and have never written a thing. Don’t be a bad code reviewer.
I do not suggest Python, Ruby on Rails, or PHP, as these are rare skills in the enterprise market. If they scratch your itch, go for it, but be aware that these skills do not translate out to commercial code review jobs. The fanbois of these languages and frameworks will hate on me, but honestly, there’s no reason to learn these languages except for the occasional job here and there, and if you’re any good at the list above, PHP in particular is easy to pick up. Fair warning, it’s a face-palm storm waiting to happen.
If you are participating in the OWASP Developer Guide, I want to have another status meeting Friday next week.
Friday 2nd November 1300 UTC
Saturday 3rd November 0000 AEDT (my time zone)
Come be my friend on Google+, and ask to be in my OWASP Guide circle. This circle can participate in the Hangout.
Hope to see you there!
I take the train between Marshall and Southern Cross Station, a terminus station with 14 or 15 platforms and hundreds of V/Line country, suburban and bus services daily. I had an app that worked (the old MetLink app). That wasn’t stellar, but it worked well enough that I didn’t need to get a paper timetable.
So imagine my continuing frustration that the most basic of use cases just doesn’t work in the complete re-write of the new app:
- I cannot find my station when standing on the station platform (!) using location search, or by searching for the station in the default “Trains” mode the app comes in from the AppStore.
- It cannot find the terminus of all V/Line services – Southern Cross Station. I’m serious. In “Train” mode, you cannot search for V/Line services or stations. In “V/Line” mode, Southern Cross is not even a station (!!). You cannot find it by clicking on the “Find my location” icon whilst in the station (!), you cannot choose it from the map, and you cannot search for it. Epic fail of all epic fails. It’s like the PTV app designers chose not to walk the 40 m from their office block to the biggest and busiest station in all of Victoria and test it out.
- Modality. It’s nearly impossible to work out that you can change the mode of transport you’re looking up by clicking the word “Trains” at the bottom of the screen. I am catching a “train”, but not the default type of “train”. Who knew? The thought that there are multiple types of trains obviously never occurred to PTV’s UX designers. There’s no button shape or indicator; it’s just in a button bar by itself, which usually means that there are no other choices.
Honestly, PTV need to test their apps:
- You should be able to find all the services within 500 m of where you are standing. Just list them all and let the filter function narrow things down in one or two keytaps.
- You should be able to find ANY station or service or transport mode via text search. It’s just not that hard. There should be no difference between a regional bus, a metropolitan tram, an intercity V/Line service, or a station or bus stop. List ‘em all, and let the filter work its magic in a few keystrokes.
- Get rid of modes. I don’t think in modes, and I use at least two every day. Free up that wasted screen real estate and replace it with a search function that works across all modes and services.
- You should be able to view a line’s entire timetable with no more than two or three clicks. Timetables -> scroll to the timetable or tap in enough to narrow things down -> voila. It’s not rocket science. Allow it to be a favorite.
- Planning a multi-mode trip is not rocket science. This is just not possible with the current PTV app.
- The old app had notifications for the services / lines you were interested in. Please bring it back. This feature may actually be in the PTV app – I simply don’t know because I have not been able to find my station or the station at which I get off.
This app is terrible. It must be withdrawn.
The Developer Guide is a huge project; it will be over 400 pages once completed, hopefully written by tens of authors from all over the world, and will hopefully become the last “big bang” update for the Guide.
The reality is our field is just too big for big bang projects. We need to continuously update the Guide, and keep it watered and fresh. The Guide needs to become like a metaphorical 400 year old eucalypt, all twisty and turny, but kept continuously green and alive by the occasional rainfall, constant sunlight, and the occasional fire.
If you are a developer and have some spare cycles, you can make a difference to the Developer Guide. I need everyone who can to add at least a paragraph here and there. I will tend to your text and give it a single conceptual integrity and possibly a bit of a prune, but with many hands, we can get this thing done.
Why developers? Many security industry folks are NOT developers and can’t cut code. We need developers because we can teach you security, but it’s difficult to instil 3 years of post graduate study and a working life cutting code. I am not fussed about your platform. Great developers know multiple platforms, and have mastered at least a couple.
I am installing Atlassian’s GreenHopper agile project management tool to track the state of the OWASP Developer Guide 2013’s progress.
Feel free to join the mailing list, come say hi, and join in our next status meeting on Google+.
I’m glad to say that I’ve been accepted to speak at linux.conf.au 2013.
My talk is how to apply the OWASP Developer Guide 2013 to your open source project.
The Open Web Application Security Project (OWASP) Developer Guide 2013 is coming soon. In this presentation, you’ll learn about the major revision to one of the major open source code hardening resources.
The new version will encompass not only web applications (although that is its primary focus), but also general advice for all languages, frameworks, and applications through the use of re-usable architecture, designs, patterns and practices that you can adopt in your code with a bit of thought.
- The latest research in application security
- How to apply new patterns to eliminate hundreds of security flaws in your apps, such as the bizarre world of race conditions, distributed and parallel artefacts. Few apps can afford to be single threaded any more, and yet these subtle flaws are easily prevented if you only knew how
- Challenges of documenting bleeding edge practices in long lived documents
- How to pull together a global open source document team whilst holding down a day job
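On the race-condition point above: these flaws are not confined to multi-threaded code. A minimal shell sketch of the classic check-then-act race and an atomic alternative (the lock path is illustrative):

```shell
# Racy pattern: test whether a lock file exists, then create it --
# two steps, so two processes can both pass the check. mkdir is atomic:
# it either creates the directory or fails, in one operation, which
# makes it a safe lock primitive.
lockdir="${TMPDIR:-/tmp}/myapp.lock"
if mkdir "$lockdir" 2>/dev/null; then
  status="acquired"
  rmdir "$lockdir"   # release the lock when done
else
  status="busy"
fi
echo "$status"
```

The same check-then-act shape turns up in web apps (read a balance, then write it back) and file handling (stat, then open), and the fix is always the same idea: make the check and the act a single atomic operation.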
If you code web apps, or write apps that need to be secure, this is a must attend presentation!
Come see me! Challenge me! Make the Guide better for non-web apps!
Our industry suffers from a lack of women – women in senior positions are very rare, I can count the women who do what I do on my hands without resorting to binary, and there are so few women coming out of uni comp sci, software development and engineering courses whom I could craft into my replacements.
IT needs women, and lots more of them, not only for the perspective they bring to the table, but because of the terrible truth that young women deciding on future careers at high school don’t see any future for themselves in our great industry, or in any of the Science, Technology, Engineering and Mathematics (STEM) subjects as a valid career choice.
There is so much to do to rectify this situation, not least the low-hanging fruit, such as eliminating booth babes. I’ve heard lots of excuses, like:
- “It’s a legal job, I don’t see the problem” (this one makes the least amount of sense)
- “Everyone does it” (no, they most certainly don’t)
So when /. posts a story on what booth babes really think of us leering at them, you know it’s going to be a stinky disgusting mess, but you have to try to convert the heathens in any case.
I’ve been a Slashdot irregular for years. In 1999, the /. “community” said some disgusting things about Richard Stevens, the author of some of the (still) best Unix and TCP/IP books. I stopped going there every day after that shameful episode. I’ve not posted there since 2010, but I have /. in my RSS feed.
I have removed that feed today and I will be deleting my account shortly.
Many of you know my very low opinion of IT vendors who use booth babes at trade shows.
Update: I found this comment to a similar post last year just a few minutes ago:
Thanks for making the main point clear, I want to chime in here as a woman and someone who has represented my company from very early on at trade shows (and does to this day). In the telecom industry in particular these booth babes run rampant, they literally provide you with a form when you register to exhibit asking if you want to hire models.
At one event a couple years ago, a guy came over to talk with our CTO (a guy) and I and said point blank to me, “do you have an ownership stake in the company? if not, at least you’ve got one foot in the door to marry this guy?” Nevermind that I’m wearing my wedding ring! All I could do was paint a “go F&%$ yourself” smile on my face and wait for him to leave. The things I would have liked to say, but it just wasn’t worth it in that context.
The problem is, most people don’t walk up to me expecting me to know about APIs, building applications, solving problems specific to their industry or use case, how supply chain works, or anything else important to their business. This is perpetuated by booth babes. How do I know? If I dress in a frumpy or slightly less feminine style, instead of my normal stylish heels and a skirt suit, I get a different reaction. If I wear skinny jeans and flats and a tshirt or hoodie, look my age (early 20s) and have a self-effacing air, they think “oh she’s a nerdy girl” and then they ask the real questions. PUH-LEASE.
If you are a vendor, I have a very strict, and very long standing rule – if you use booth babes, I either don’t recommend you to my clients, or I actively campaign against you, and I will never, ever buy from you again. Such vendors have lost more than $1m in recommendations from me alone in the last 10 years, and I doubt I am alone in my opinion of such appalling, women-hating sales tactics.
So fast forward to today. I logged in after a few days to see whether my romantic idealisation of early Slashdot held up against even the 1999-era Slashdot lowlife scum. I was saddened and disappointed. I lost my decade-long “excellent” karma rating to peer moderation, and it’s no surprise the peers at Slashdot hate women.
One of my posts had to get more than seven negative flamebait downward moderation clicks to get the score it finally received.
So let’s look at the quality gem of a reply that gets +5 moderation (errors in copy and paste I will leave to the troll, can’t even do that right):
“ook at my low user ID, I’ve been here for longer than some of you have been alive.”
No one cares. I’m probably the same age as you but I don’t go around pointing it out as if it somehow adds extra weight to the argument.
“I am literally white hot angry with whomever did it b”
You’ll get over it.
“f you have a daughter, I expect you’ll want her to be a geekgrrl. If you want that outcome, you will join me in boycotting booth babes.”
Actually if I had a daughter I’d let her do whatever she wanted. Unfortunately you obviously don’t realise it but you’re just another one of those self righteous prudish males who seem to think that women should only do the jobs YOU approve of. Newsflash pal – its the WOMEN who get to decide whether to do it , not people like you.
I suspect in another century you’d be at the pulpit foaming at the mouth and damning any woman who dared go out with an unmarried man or wear a short skirt or speak before a man gave her permission.
You know what – Fuck you and your kind.
From viol8, a 40-something troll programmer who lives and works somewhere in Europe (if he can be trusted to thump things into the post box), who comes across as an arrogant Australian or English expat. I can’t be arsed working out who he is any longer – he’s exactly like any number of the worthless women-hating smegheads that infest Slashdot.
It’s time to put /. out of its misery and terminal decline. It has been an irrelevant community for years, and now the cesspool is dead to me.
ajv (4061, ex-member /. 1997-2012)
Update: RSS feed – deleted. Twitter – unfollowed. Can I find how to delete my /. account, no I can’t. Help appreciated in the box below.
In his post “PCI’s Money Making Cash Cow“, Andrew Weidenhamer must have had a bad week of being challenged (or in his words, “bullied”) by a PCI DSS Internal Security Assessor (ISA). This is not acceptable, but QSAs must accept that their advice is there to help the organization become compliant, not to provide a cash cow of their own nor to go unchallenged.
Not knowing the specifics of the background that led to this article, I have to assume that the ISA has pushed back on one or more of:
- Scope – this has traditionally been the QSA’s sole domain, and (uncharitably) they probably don’t want anyone else busting a move in their profitability zone.
- Interpretation of the meaning of various clauses. I wrote the OWASP Top 10 2007, which was incorporated in the PCI DSS. I find it highly amusing to hear some of the “meanings” attributed to what I wrote.
- Being forceful about adhering to the “intent” versus the “letter” of the PCI DSS. This is a problem where the standard has to be deliberately vague, but the Council should be open and honest about what they meant when they wrote it – do they mean a web app, or something else? The PCI DSS is highly focussed on web apps, not other apps. Trying to extend it is like extending a repair manual for a ship to a bus. They both have diesel engines, but you know it doesn’t work that way. Don’t force the issue if you don’t know.
Being in this space right now, I understand the issues here. There are several problems I hope the SSC will pick up and resolve in the next major overhaul of the standard.
- Make the meaning of “in scope” and “out of scope” a great deal more tightly defined. The biggest problem in my view is it’s far too easy to drag in unrelated systems in a cloud / virtualized / management environments. I’m all for a solid ring fence, but to think the only way to do it is by layer two firewalls is farcical at best and destructive of the Council’s reputation at worst. Firewalls have their place, but as part of a wider set of more than adequate other controls, such as strong authentication, authorization, auditing and escalation. Let’s put it this way, I do nearly all my penetration tests over SSL and through firewalls and in direct view of IDS’s, and I still manage to have a very, very good time. If firewalls are all you’ve got, we’ve got it very, very wrong.
- Leaving the QSA to determine the scope is inherently conflicted. They get a lot more money if they scope it conservatively (i.e. as many of the requirements as possible, and as many systems as possible), and there’s a lot of risk if they scope it minimally but to the letter of the standard. I strongly suggest the SSC require tier one merchants to hire two QSAs: one to find the information out and set the scope, and one to assess the desired scope and systems. Or work just like the internal audit versus external audit functions in the financial world, where the ISA’s output is treated as trustworthy and evaluated from time to time. Is either method perfect? No, but it’s a lot less conflicted than the current situation.
- The glossary, the prioritized list, fact sheets, PCI DSS for Dummies, what you heard on the community grapevine, or the guidelines ARE NOT the standard. They can be used to support an argument to do something in the spirit of the standard, but they are most certainly NOT the standard. QSAs – please understand that unless you demonstrate that your reason for a “not in place” is actually required by one of the in-scope requirements, then it’s not required to be in place. Is it a good idea? Almost certainly, but that’s a different standard.
- Many folks need and want an Attestation of Compliance … but at what cost? The process of working through not getting an AoC is almost completely off the reservation. Most folks don’t even think about this third way, but it’s actually fairly likely. If your activities are all about getting an AoC at all costs, PCI DSS has failed to achieve a good balance. There are places for a black and white compliance standard, and there are places for risk based assessments. If it’s going to cost you $25m to fix a $25 a year problem, that’s a terrible, terrible outcome. I hope the SSC addresses this in the future, as many folks going through PCI DSS compliance will need an AoC but can’t get one because their QSA has said no for the most minor of reasons.
- Make it easy for folks to ask questions directly of the Council. Nearly all of the requirements are vague. One QSA might have been told one thing by the Council, another has never come across the issue before, and you have two opinions, one right and one somewhat wrong. Too many times, an argument that goes on for weeks could be solved with a simple email to the Council. Channeling it through one side of the argument (the QSA) is inherently conflicted. Let’s be open and transparent in this process.
In my view, the best way to deal with a QSA is to be friendly, but make it known that you will challenge them in a collegiate way from time to time, and that there’s nothing personal about that challenge. The QSA may not understand the business or the technology, and they may have got it completely wrong.
On the other hand, you as an ISA or as a hiring company may not understand the intent or learnings of the Council, and need to get your house in order, which is far, far more likely.
PCI DSS does this in a very blunt, non risk assessed way. For the first time ever, someone with a bigger stick is holding you to account to do it the way you should have done it in the first place. There is simply NO EXCUSE for SQL injection or XSS in any app, let alone a payment app. However, so many of the requirements are vague and so open ended as to be nearly impossible to comply with unless you hoodwink the QSA. And that doesn’t serve the real purpose of this exercise.
QSAs who fear going to every meeting with you are not going to offer good advice. They won’t offer advice at all. It’s best to walk a very fine line: be friendly, and learn all you can to get from A to B in the best way possible that achieves credit card security, but don’t be so chummy that you find it hard to say “no” when you need to say “no”.
My rule of thumb is that if you’re having a difficult conversation with your acquirer when you should have been having a difficult conversation with your developers, your marketers, your business or the QSA, then you’ve done it wrong. PCI DSS is here to save your bacon, not be a speed bump. However, there is much to improve in the QSA engagement process, mainly in my view to advance true independence of QSAs.