I’ve been mulling this one over for a while. And honestly, after a post to an internal global mail list at work putting forward my ideas, I’ve come to realise there are at least two camps in information security:
- Those who aim, via the usual suspects, to protect things
- Those who aim, via often controversial and novel means, to protect people
Think about this for one second. If your compliance program is entirely around protecting critical data assets, you’re protecting things. If your infosec program is about reducing fraud, building resilience, or reducing harmful events, you’re protecting people, often from themselves.
I didn’t think my rather longish post, which brought together the ideas of the information swarm (it’s there, deal with it), information security asymmetry and pets/cattle (I rather like this one), would land with a heavy thud akin to 95 bullet points nailed to the church door.
So I started thinking – why do people still promulgate stupid policies that have no basis in evidence? Why do people still believe that policies, standards, and spending squillions on edge and endpoint protection will save them when it is trivial to break through?
Faith. Faith that the received wisdom of our dads and granddads is appropriate for today’s conditions.
“Si Dieu n’existait pas, il faudrait l’inventer” Voltaire
(Literally “If God did not exist, it would be necessary to invent him”. Often mis-translated as “if religion did not exist, it would be necessary to create it”, but close enough for my purposes.)
I think we’re seeing the beginning of infosec religion, where it is not acceptable to speak up against the unthinking enforcement of hand-me-down policies like 30 day password resets or absurd password complexity, and where it is impossible to ask for reasonable alternatives when you attempt to rule out imbecilic ones like basic authentication headers.
We cannot expect everyone using IT to do it right, or to have high levels of operational security. Folks often have a quizzical laugh at my rather large random password collection and my use of virtual machines to isolate Java and an icky SOE. But you know what? When LinkedIn got pwned, I had zero fear that my use of LinkedIn would compromise anything else. I had used a longish random password unique to LinkedIn. So I could take my time resetting that password, safe in the knowledge that even with the best GPU crackers in existence, the heat death of the universe would come before my password hash was cracked. Plenty of time. Fantastic … for me, and I finally got a pay off for being so paranoid.
But… I don’t check my main OS every day for malware I didn’t create. I don’t check the insides of my various devices for evil maid MITM or keyloggers. Let’s be honest – no one but the ultra paranoid does this, and they don’t get anything done. But infosec purists expect everyone to have a bleached white pristine machine to do things – or else the user is at fault for not maintaining their systems.
We have to stop protecting things and start protecting humans, by creating human friendly, resilient processes with appropriate checks and balances that do not break as soon as a key logger, a network sniffer or, more to the point, some skill is brought to bear. Security must be agreeable to humans, transparent (in plain sight as well as easy to follow) and equitable, and the user has to be in charge of their identity, their linked personas, and ultimately their preferred level of privacy.
I am nailing my colours to the mast – we need to make information technology work for humans. It is our creature, to do with as we want. This human says “no”.
A colleague of mine just received one of those awful marketing calls where the vendor rings *you* and demands your personal information “for privacy reasons” before continuing with the phone call.
As a consumer, you must hang up to avoid being scammed. End of story. No exceptions.
Even if the business has a relationship with the consumer, asking them to prove who they are is wildly inappropriate. Under no circumstances should a customer be required to provide personal information to an unknown caller. It must be the other way around – the firm must provide positive proof of who they are! And by calling the client, the firm already knows who the client is, so there’s no reason for the client to prove who they are.
As a business, you are directly hurting your bottom line and snatching defeat from the jaws of victory by asking your customers to prove their identity to you.
This is about the dumbest marketing mistake ever – many customers will automatically assume (correctly, in my view) that the campaign is a scam, and repeatedly hang up, lowering goal completion rates and driving up the cost of sales. This dumb move can cost a company millions in opportunity costs in the form of:
- wasted marketing (hundreds of dropped customer contacts for every “successful” completed sale);
- increased fraud against the consumer, and ultimately against the business when customers reject fraudulent transactions;
- the loss of thousands if not hundreds of thousands of customers, and their ongoing and future revenue, if they lose trust in the firm, or if the firm’s lack of fraud prevention causes them to suffer fraud by allowing scammers to easily harvest PII from the customer base and misuse it.
Customers hate moving businesses once they have settled on a supplier of choice, but if you keep on hassling them the wrong way, they do up and leave.
So if any of you are in marketing or are facing pressure from the business to start your call script by asking for personally identifying information from your customers: you are training your customers to become victims of phishing attacks, and that will cost you millions of dollars and many more customers than the campaign will ever gain you.
It’s more than just time to change this very, very, very bad habit.
Everything now works.
The quick version is:
- Create a new Fedora 18 VM
- Do not use “Easy install”
- Disable 3D acceleration in the VM settings (Command-E) prior to starting the install, otherwise you get a spinning idle cursor and no action upon first boot
- Install as you see fit. I use a 64 bit F18, 4 GB of RAM (my Mac has 16 GB of RAM), 100 GB partition, and 4 virtual processors. YMMV. General rule of thumb is that 1 GB is plenty for a casual Gnome desktop, but if you’re using it for software development like me, then the more the better.
- Once installed, update all software using
sudo yum update
- and then reboot.
- This gets you to kernel 3.8.4-202.fc18.x86_64 at the time of writing. This is currently incompatible with VMware tools 9.2.2-893683, but there are known patches (see below) until a VMWare Tools update comes along.
- Let’s install the build tool chain needed to complete the VMware Tools installation
sudo yum install patch kernel-devel kernel-headers gcc make
- Reboot just to make sure we’re all good
- Click “Install VMware Tools”
- Extract the tarball to ~/Downloads/vmware-tools-distrib/
- Visit this page http://erikbryantology.blogspot.com.au/2013/03/patching-vmware-tools-in-fedora-18.html
- Download the vmware9.k3.8rc4.patch and vmware9.compat_mm.patch patches and, if you’re not a patch wizard, the shell scripts that apply those patches, too.
- Follow the directions in the above page to patch the source to the VMCI and HGFS file system sharing modules
- Now build the patched VMware tools (a sketch of the whole sequence is at the end of this post)
- There should be no compilation ERRORS (there will be a couple of warnings, particularly for 64 bit architectures)
- Test drag and drop file sharing, cut-n-paste, screen resizing, shared folders, and printing to a Mac OS X printer.
- If it’s all good, shutdown the VM
- Go to Settings for the VM (Command-E)
- In Display, re-enable 3D acceleration
- Start up the VM
At this point, you should have access to accelerated 3D within the VM and all VMWare Toolbox features.
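For the record, the patch-and-build sequence boils down to something like this sketch. The -p level and the module tarball layout are my assumptions – if in doubt, use the shell scripts from the linked page, which apply the patches for you:
cd ~/Downloads/vmware-tools-distrib/lib/modules/source
# Unpack, patch, and repack each affected module (vmci shown; vmhgfs is the same dance)
tar xf vmci.tar
patch -p0 < ~/Downloads/vmware9.k3.8rc4.patch   # the -p level is a guess; check the patch headers
tar cf vmci.tar vmci-only
# Then run the installer as usual
cd ~/Downloads/vmware-tools-distrib
sudo ./vmware-install.pl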
Responsible disclosure is a double-edged sword. The Faustian bargain is that I keep my mouth shut to give you time to fix the flaws, not for you to ignore me. I would humbly suggest that it is very relevant to your interests when a top security researcher submits a business logic flaw to you that is trivially exploitable with just iTunes or a browser, requiring no actual hacking skills.
If anyone knows anyone at Apple, please re-share or forward this post, and ask them to review my rather detailed description of my rather simple method of exploiting the Apple ID password reset system, which I submitted over six months ago with, so far, zero response beyond an automated reply. The report tracking number is #221529179, submitted August 12, 2012.
My issue should be fixed along with the other issues before they bring password reset back online – it must not return with my flaw intact.
I have a bit of a code review job at the moment. It’s a large code base, and you all know what that means. LOTS OF RAM! So I got me a 16 GB upgrade. Then I found that I could only allocate 8 GB to a VM in VMWare Fusion. So here’s how to scan a big chunk of code with minimal pain:
The default VM disk size for an Easy Installed Ubuntu is 20 GB, with 8 GB of swap. WTF. So don’t use Easy Install, as you’ll run out of disk space scanning a moderately sized application. I expanded mine to 80 GB after it was all installed, but if you are smart, unlike me, do it when you first build the system.
To give a VM more than 8 GB in VMWare Fusion: allocate 8192 MB (the GUI’s maximum) whilst the VM is shut down, then open the package contents of the VM by right clicking it (I’m on a Mac, where if you rename a folder foobar.vmwarevm, it becomes a package automagically). Find the VMX file and open it carefully in a decent editor (vi or TextWrangler or TextMate) – there is magic here, and if you edit it wrong, your VM will not boot. Change memsize = "8192" to, say, memsize = "12288" and save it out. I wouldn’t go too close to your total memory size, as you’ll start paging on the Mac, and that’s just pain. Boot the VM. Confirm you have enough memory!
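In practice, the edit looks something like this (the VM name and location here are examples – adjust for your own setup):
cd ~/Documents/Virtual\ Machines.localized/CodeReview.vmwarevm
cp CodeReview.vmx CodeReview.vmx.bak   # back up first – a botched edit means a VM that won't boot
vi CodeReview.vmx                      # change memsize = "8192" to memsize = "12288"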
First off, do not even try to scan a large code base from within Audit Workbench. It will just fail.
Secondly, it seems that HP do not test the latest version of SCA on OpenSuse 12.2, which is a shame, as I really liked OpenSuse. There’s no way to fix up the dependencies without using an unsafe (older) version of Java, so I gave up on it.
Despite not being listed as a qualified platform (CentOS, Red Hat, and OpenSuse all are), Ubuntu has a graphical installer compared to OpenSuse’s text only install. Alrighty, then.
Install the latest Oracle Java 1.7 using the 64 bit JDK for Linux. I installed it to /usr/local/java/. Weep, for you now have a massive security hole installed.
Force Ubuntu to use that JVM using update-alternatives:
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk1.7.0_15/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.7.0_15/bin/javac" 1
sudo update-alternatives --set java /usr/local/java/jdk1.7.0_15/bin/java
sudo update-alternatives --set javac /usr/local/java/jdk1.7.0_15/bin/javac
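A quick sanity check that the switch took:
which java      # should report /usr/bin/java (the alternatives symlink)
java -version   # should report 1.7.0_15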
I created the following in /etc/profile.d/java.sh
#!/bin/sh
JAVA_HOME=/usr/local/java/jdk1.7.0_15
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH
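Log out and back in for that to take effect, or pull it into your current shell:
source /etc/profile.d/java.sh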
Note that I did not tell Ubuntu about Java Web Start. If you want to keep your Ubuntu box yours, do not let JWS anywhere near a browser. If you did do this step, it’s best to delete javaws completely from your system to avoid any potential for drive-by download trojans.
Install SCA as per HP’s instructions.
Now you need to go hacking, as HP for some reason still insist that 32 bit JVMs are somehow adequate. Not surprisingly, Audit Workbench pops up an exception as soon as you start it if you take no further action to make it work. So let’s fix that up.
I went and hacked JAVA_CMD in /opt/HP_Fortify/HP_Fortify_SCA_and_Apps_3.80/Core/private-bin/awb/productlaunch to point at the 64 bit JDK instead of the JRE provided by HP – presumably along these lines, given the JDK path used throughout this post:
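JAVA_CMD=/usr/local/java/jdk1.7.0_15/bin/java   # assuming the JDK installed earlier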
After that, Audit Workbench will run.
Now, let’s work on ScanWizard. ScanWizard is really the only way to produce repeatable scans that don’t run out of memory. So run ScanWizard. It’ll create a shell file for you to edit. You need to make the following changes:
MEMORY="-Xmx6000M -Xms1200M -Xss96M "
LAUNCHERSWITCHES="-64 "
There’s a space after -64. Without that it fails.
Then there are bugs in the generated scan script that mean it will never work for a 64 bit scan. It’s almost like HP never tested 64 bit scans on large code bases (ones needing > 4 GB to complete a scan). I struggle to believe that, especially as their on demand service is almost certainly using something very akin to this setup.
Change this bit of the scan shell script:
FILENUMBER=`$SOURCEANALYZER -b $BUILDID -show-files | wc -l`
if [ ! -f $OLDFILENUMBER ]; then
    echo It appears to be the first time running this script, setting $OLDFILENUMBER to $FILENUMBER
    echo $FILENUMBER > $OLDFILENUMBER
else
    OLDFILENO=`cat $OLDFILENUMBER`
    DIFF=`expr $OLDFILENO "*" $FILENOMAXDIFF`
    DIFF=`expr $DIFF / 100`
    MAX=`expr $OLDFILENO + $DIFF`
    MIN=`expr $OLDFILENO - $DIFF`
    if [ $FILENUMBER -lt $MIN ] ; then SHOWWARNING=true; fi
    if [ $FILENUMBER -gt $MAX ] ; then SHOWWARNING=true; fi
    if [ $SHOWWARNING == true ] ; then
to this:
FILENUMBER=`$SOURCEANALYZER $MEMORY $LAUNCHERSWITCHES -b $BUILDID -show-files | wc -l`
if [ ! -f $OLDFILENUMBER ]; then
    echo It appears to be the first time running this script, setting $OLDFILENUMBER to $FILENUMBER
    echo $FILENUMBER > $OLDFILENUMBER
else
    OLDFILENO=`cat $OLDFILENUMBER`
    DIFF=`expr $OLDFILENO "*" $FILENOMAXDIFF`
    DIFF=`expr $DIFF / 100`
    MAX=`expr $OLDFILENO + $DIFF`
    MIN=`expr $OLDFILENO - $DIFF`
    SHOWWARNING=false
    if [ $FILENUMBER -lt $MIN ] ; then SHOWWARNING=true; fi
    if [ $FILENUMBER -gt $MAX ] ; then SHOWWARNING=true; fi
    if [ $SHOWWARNING = true ] ; then
Yes, there’s an uninitialized variable AND a syntax error in a few lines of code. Quality. Two equals signs (==) are not valid POSIX sh/dash syntax – and Ubuntu’s /bin/sh is dash – so obviously that was well tested before release! Change it to a single = and you should be golden.
After that, just keep an eye out for out of memory errors and for any time it says “Java command not found”. Opening a large FPR file may require bumping up Audit Workbench’s memory – I had to with a 141 MB FPR file. YMMV.
I was heartened to find out that someone was given grant money for a study that demonstrates that the fresh brains market in a zombie apocalypse would peter out after six months. Afterwards, the earth would be either empty (most likely) or a wasteland with few zombies.
So that gave me an idea. Gresham’s Law, crudely stated, says that bad money drives out good money. My thesis is that the market for high quality security assessments (= “good money”, e.g. skilled manual review) is being driven out by the prevalence of low / unknown quality security assessments (= “bad money”) in a race to the bottom on fees. This correlates with an increase in business losses as attackers stop putting up alert boxes and start stealing (brains) from the population.
So is there any hope? Do we need hope? Could we have a market in the post-trust Internet?
Let’s have a thought experiment – what would the Internet look like post zombie apocalypse (or, if you’re Paul Fenwick, post a singularity AI overlord who turns out not to be our friend)? Could commerce exist, and in what form, if we totally (and I mean totally) debased the security market to the point that there is no trust on the Internet?
What would that look like for traders in an all lolcats world?
In my view, the signs of a post-zombie-apocalypse market are:
- The market would mainly consist of small unregulated trades, much like the drug deals you see on TV crime shows today;
- There would be a limited market for large trades, and large trades would be highly regulated in a walled garden;
- There would be very limited to no trust;
- Trades would be done in places that are not particularly consumer friendly (either “friendly” to mall owners like Amazon or Etsy, or dark places like the Silk Road);
- There would likely be an arms race of sorts between the main actors in the market, such as targeted phishes of oppressed ethnic minorities or other outgroups;
- There would be little to no enforcement, as there’s basically no detection;
- There would be minimal to no proactive security measures being undertaken, with this “technology” essentially unknown to the market or deeply hoarded by those who actually know.
In my view, many of these signs are cropping up now, with the dark net market of malware, infected machines, and illicit substances traded for virtual currencies.
We are at a turning point for trust. Either we must support the market in a way that punishes weak security or bad money, and rewards leading security practices, or we give up and embrace the smaller and more diverse dark market. There’s still money to be made – for some – in the dark market.
What do you think the future of the security market looks like?
I have taken the step of finally splitting the cut-n-paste import from my blog at Advogato into the days they actually occurred. All that content was here previously, but in some cases bunched together over many thousands of lines in single massive multi-month postings.
Some early permalinks are gone, but that’s okay, you can search for the content. The content I’m talking about dates back more than ten years.
This post is not in Latin, but essentially a call to the Information Security industry to end policies based upon argumentum ad antiquitatem, which includes:
- Password change, complexity and length policies and standards that simply don’t make sense in the light of research and tools that show that we can crack ALL passwords in a reasonable time. It’s time to move on to two factor authentication, alternatives such as OAuth2 (e.g. Facebook/Twitter/G+ integration) or Mozilla Account Manager, and random long passphrases for all accounts (see the one-liner after this list).
- “Security” shared knowledge questions and answers. These are commonly used to “prove” that you have sufficient evidence of identity to resume access to an account. We see these actively exploited continuously now. Unfortunately, most family members, including ex-spouses, have sufficient knowledge of the identity, and access to the person’s identity documents, that such questions, no matter how phrased (like “What was your favorite childhood memory”), are simply unsafe at any speed, as more than ONE person knows or can guess the correct answer.
- That requiring authentication is enough to eliminate risks in your application. Identity and access management is important, but it’s only part of the picture.
- That enforcing SSL or access through a firewall is enough to eliminate risks in your application. Confidentiality and integrity of the connection are vital, especially if you’re not doing it today, but they’re only part of the picture.
- That obfuscation is enough to deter hackers. Client side code is so beguiling and the UX is often amazing, but it’s not safe. Business decisions must be enforced at a trusted location, and there’s little business reason to do this twice. So let’s get that balance right.
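On the passphrase front, generating a long random password is a one-liner these days (one way among many, assuming OpenSSL is installed):
openssl rand -base64 32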
What are some of your pet “argumentum ad antiquitatem” fallacies?
So in a fit of security through obscurity, I renamed my WordPress database tables and promptly broke WordPress with a highly informative “You do not have sufficient permissions to access this page.” error message when accessing wp-admin.
Changing the prefix is easiest done with a new installation, but my installation dates from the very first versions of WordPress when the dinosaurs roamed. Due to WordPress’s design, changing the database prefix (‘wp_’) is not as straightforward as you would expect.
In this exercise, we’re going to change from the default “wp_” prefix to “foo_”. If you’re doing this for security through obscurity reasons, don’t use “foo_”, use something you made up. Trust me, my prefix is NOT “foo_”. In wp-config.php, change:
$table_prefix = 'wp_';
to:
$table_prefix = 'foo_';
Once you’ve saved the file, your WordPress installation is now officially broken. Move fast!
Rename your tables
use myblog;
show tables;
and for each of the tables you see there, do this:
rename table wp_options to foo_options;
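If you have a lot of tables, you can generate those rename statements rather than typing each one (a sketch, assuming the mysql command line client, our example myblog database, and the example foo_ prefix):
mysql -u root -p -N myblog -e "SELECT CONCAT('rename table ', table_name, ' to foo_', SUBSTRING(table_name, 4), ';') FROM information_schema.tables WHERE table_schema = 'myblog' AND table_name LIKE 'wp\_%';" > rename.sql
mysql -u root -p myblog < rename.sql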
At this point, your blog will now be viewable again, but you will not be able to administrate it. Accessing /wp-admin/ will say “You do not have sufficient permissions to access this page.”
Fix WordPress Brain Damage
Let’s go ahead and fix that for you:
UPDATE foo_usermeta SET meta_key = REPLACE(meta_key,'wp_','foo_');
UPDATE foo_options SET option_name = REPLACE(option_name,'wp_','foo_');
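A quick check that the fix took (again with our example names):
mysql -u root -p myblog -e "select meta_key from foo_usermeta where meta_key like 'foo\_%';"
You should see foo_capabilities and foo_user_level in the output – those are the rows that gate access to wp-admin.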
I always live in hope that just one day, the folks over at Fedora will actually have a pain free VMWare installation. Not to be. Here’s how to do it with minimal gnashing of teeth.
Bugs that get you before anything else
On VMWare Fusion 5, Fedora 18 x86_64 Live DVD’s graphical installer will currently boot and then get stuck at a blue GUI screen if you have 3D acceleration turned on (which is the default if you choose Linux / Fedora 64 bit).
- Virtual Machine -> Settings -> Display -> disable 3D acceleration.
We’ll come back to this after the installation of VMWare Tools.
Installing Fedora 18 in VMWare Fusion / VMWare Workstation 8
The installation is pretty straight forward … as long as you can see it.
The only non-default choice I’d like you to change is to set your standard user up to be in the administrators group (it’s a checkbox during installation). Being in the administrators group allows sudo to run. If you don’t want to do this, drop sudo from the beginning of all of the commands below, and use “su -” to get a root shell instead.
The new graphical installer still has a few bugs:
- Non-fatal – On the text error message screen (Control-Alt-F2) there’s an error message from grub2 (still!): file not found /boot/grub2/locale/en.mo.gz. This will not prevent installation, so just ignore it for now (as the Fedora folks have for a couple of releases!). Go back to the live desktop screen by using Control-Alt-F1.
- PITA – Try not to move the installer window offscreen, as it’s difficult to finish the installation if it’s even a little off screen. If you get stuck, press tab until you hit the “Next” button – or just reboot and start again.
Once you have Fedora installed, login and open a terminal window (Activities -> type in “Terminal”)
sudo yum update
sudo reboot
sudo yum install kernel-devel kernel-headers gcc make
sudo reboot
Fix missing kernel headers
At least for now, VMware Tools 9.2.2 build-893683 will moan about a path not found error for the kernel headers. Let’s go ahead and fix that for you:
sudo cp /usr/include/linux/version.h /lib/modules/`uname -r`/build/include/linux/
NB: The backtick (`) executes the command “uname -r” to make the above work no matter what your kernel version is.
NB: Some highly ranked and well meaning instructions want you to install the x86_64 or PAE versions of kernel devel or kernel headers when trying to locate the correct header files. This is not necessary for the x86_64 kernel on Fedora 18, which I am assuming you’re using as nearly everything released by AMD or Intel for the last six years is 64 bit capable. Those instructions might be relevant to your interests if you are using the 32 bit i686 version or PAE version of Fedora 18.
Mount VMWare Tools
Make sure you have the latest updates installed in VMWare before proceeding!
- Virtual Machine -> Install VMWare Tools
Fedora 18 mounts removable media in a per-user specific location (/run/media/<username>/<volume name>), so you need to know your username and the volume name.
Build VMWare Tools
Click on Activities, and type Terminal
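Before extracting, it’s worth checking what the volume actually mounted as (note the space in the volume name):
ls /run/media/`whoami`/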
tar zxf /run/media/`whoami`/VMware\ Tools/VMw*.tar.gz
cd vmware-tools-distrib
sudo ./vmware-install.pl
Make sure everything compiled okay, and if so, restart:
sudo reboot
NB: The backtick (`) executes the command “whoami” to make the above work no matter what your username is.
No 3D Acceleration oh noes!1!! Install Cinnamon or Mate
Now all the normal VMWare Tools features will work. Unfortunately, after all the faffing about, I didn’t manage to get working 3D acceleration, so I ended up installing something a bit lighter than Gnome 3.6, which requires hardware 3D acceleration.
- Activities -> Software -> Packages -> Cinnamon for a more modern desktop appearance or
- Activities -> Software -> Packages -> MATE for old school Gnome 2 desktop appearance
- From the session pull down, change across to Cinnamon or Mate and log back in
This might be telling folks to suck eggs, but if you are doing secure code reviews and your development skills relate to type 1 JSP and Struts 1.3, it’s really time you got stuck into volunteering to code for open source projects that use modern technologies. There are heaps of code projects at OWASP that need help, including helping me with code snippets in a modern paradigm.
I don’t care what technologies you choose, but your code reviews will not be using Type 1 JSPs or Struts for that much longer – if at all. Time to upskill!
- Ajax anything. Particularly jQuery and node.js. GWT is on the wane, but still useful to know
- Spring Security, Spring Framework and particularly Spring Web Flow are essential skills for any code reviewer doing commercial enterprise code reviews
- .NET 4.5 and Azure are killer skills at the moment, particularly as Windows 2012 has just been released. Honestly, there is a good market to be a specialist just in this language and framework set, as it’s literally too large for any one person to know.
- Essential co-skills: Continuous integration, agile methodologies (you have updated your services to be agile aligned, right?), and writing security unit tests so your customers can repro the issues you find.
It’s important to realise that good code reviewers can code, if poorly. Poor code reviewers don’t code and have never written a thing. Don’t be a bad code reviewer.
I do not suggest Python, Ruby on Rails, or PHP, as these are rare skills in the enterprise market. If they scratch your itch, go for it, but be aware that these skills do not translate out to commercial code review jobs. The fanbois of these languages and frameworks will hate on me, but honestly, there’s no reason to learn them except for the occasional job here and there, and if you’re any good at the list above, PHP in particular is easy to pick up. Fair warning: it’s a face palm storm waiting to happen.
If you are participating in the OWASP Developer Guide, I want to have another status meeting Friday next week.
Friday 2nd November 1300 UTC
Saturday 3rd November 0000 AEDT (my time zone)
Come be my friend on Google+, and ask to be in my OWASP Guide circle. This circle can participate in the Hangout.
Hope to see you there!
I take the train between Marshall and Southern Cross Station, a terminus station with 14 or 15 platforms and hundreds of V/Line country, suburban and bus services daily. I had an app that worked (the old MetLink app). That wasn’t stellar, but it worked well enough that I didn’t need to get a paper timetable.
So imagine my continuing frustration that the most basic of use cases just doesn’t work in the complete re-write of the new app:
I cannot find my station when standing on the station platform (!) using location search or by searching for the station in the default “Trains” mode the app comes in from the AppStore.
It cannot find the terminus of all V/Line services – Southern Cross Station. I’m serious. In “Train” mode, you cannot search for V/Line services or stations. In “V/Line” mode, Southern Cross is not even a station (!!). You cannot find it by clicking on “Find my location” icon whilst in the station (!), and you cannot choose it from the map, and you cannot search for it. Epic fail of all epic fails. It’s like the PTV app designers chose not to walk the 40 m from their office block to the biggest and busiest station in all of Victoria and test it out.
Modality. It’s nearly impossible to work out that you can change the mode of transport you’re looking up by clicking the word “Trains” at the bottom of the screen. I am catching a “train”, but not the default type of “train”. Who knew? The thought that there are multiple types of trains obviously never occurred to PTV’s UX designers. There’s no button shape or indicator – it’s just a word in a button bar by itself, which usually means there are no other choices.
Honestly, PTV need to test their apps:
- You should be able to find all the services within 500 m of where you are standing. Just list them all and let the filter function narrow things down in one or two keytaps.
- You should be able to find ANY station or service or transport mode via text search. It’s just not that hard. There should be no difference between a regional bus, a metropolitan tram, an intercity V/Line service, or a station or bus stop. List ’em all, and let the filter work its magic in a few keystrokes.
- Get rid of modes. I don’t think of modes and I use at least two every day. Free up that wasted screen real estate and replace it with a search function that works across all modes, and services.
- You should be able to view a line’s entire timetable with no more than two or three clicks. Timetables -> scroll to the timetable or tap in enough to narrow things down -> voila. It’s not rocket science. Allow it to be a favorite.
- Planning a multi-mode trip is not rocket science. This is just not possible with the current PTV app.
- The old app had notifications for the services / lines you were interested in. Please bring it back. This feature may actually be in the PTV app – I simply don’t know because I have not been able to find my station or the station at which I get off.
This app is terrible. It must be withdrawn.
The Developer Guide is a huge project; it will be over 400 pages once completed, hopefully written by tens of authors from all over the world, and will hopefully become the last “big bang” update for the Guide.
The reality is our field is just too big to do big bang projects. We need to continuously update the Guide, and keep it watered and fresh. The Guide needs to become like a metaphorical 400 year old eucalypt, all twisty and turny, but kept continuously green and alive by occasional rainfall, constant sunlight, and the occasional fire.
If you are a developer and have some spare cycles, you can make a difference to the Developer Guide. I need everyone who can to add at least a paragraph here and there. I will tend to your text and give it a single conceptual integrity and possibly a bit of a prune, but with many hands, we can get this thing done.
Why developers? Many security industry folks are NOT developers and can’t cut code. We need developers because we can teach you security, but it’s difficult to instil 3 years of post graduate study and a working life cutting code. I am not fussed about your platform. Great developers know multiple platforms, and have mastered at least a couple.
I am installing Atlassian’s Greenhopper agile project management tool to track the state of the OWASP Developer Guide 2013’s progress.
Feel free to join the mailing list, come say hi, and join in our next status meeting on Google+.
I’m glad to say that I’ve been accepted to speak at linux.conf.au 2013.
My talk is how to apply the OWASP Developer Guide 2013 to your open source project.
The Open Web Application Security Project (OWASP) Developer Guide 2013 is coming soon. In this presentation, you’ll learn about the major revision to one of the major open source code hardening resources.
The new version will encompass not only web applications (although that is its primary focus), but also general advice for all languages, frameworks, and applications through the use of re-usable architecture, designs, patterns and practices that you can adopt in your code with a bit of thought.
- The latest research in application security
- How to apply new patterns to eliminate hundreds of security flaws in your apps, such as the bizarre world of race conditions, distributed and parallel artefacts. Few apps can afford to be single threaded any more, and yet these subtle flaws are easily prevented if you only knew how
- Challenges of documenting bleeding edge practices in long lived documents
- How to pull together a global open source document team whilst holding down a day job
If you code web apps, or write apps that need to be secure, this is a must attend presentation!
Come see me! Challenge me! Make the Guide better for non-web apps!
Our industry suffers from a lack of women – women in senior positions are very rare, I can count the women who do what I do on my hands without resorting to binary, and there are so few women coming out of Uni comp sci, software development and engineering courses that I could recruit and craft into my replacements.
IT needs women, and lots more of them, not only for the perspective they can bring to the table, but because of the terrible truth that young women deciding on future careers at high school don’t see any future for themselves in our great industry, or in any of the Science, Technology, Engineering and Mathematics (STEM) subjects as a valid career choice.
There is so much to do to rectify this situation, not least the low hanging fruit, such as eliminating booth babes. I’ve heard lots of excuses, like:
- “It’s a legal job, I don’t see the problem” (this one makes the least amount of sense)
- “Everyone does it” (no, they most certainly don’t)
So when /. posts a story on what booth babes really think of us leering at them, you know it’s going to be a stinky disgusting mess, but you have to try to convert the heathens in any case.
I’ve been a Slashdot irregular for years. In 1999, the /. “community” said some disgusting things about Richard Stevens, the author of some of the (still) best Unix and TCP/IP books. I stopped going there every day after that shameful episode. I’ve not posted there since 2010, but I have /. in my RSS feed.
I have removed that feed today and I will be deleting my account shortly.
Many of you know my very low opinion of IT vendors who use booth babes at trade shows.
Update: I found this comment to a similar post last year just a few minutes ago:
Thanks for making the main point clear, I want to chime in here as a woman and someone who has represented my company from very early on at trade shows (and does to this day). In the telecom industry in particular these booth babes run rampant, they literally provide you with a form when you register to exhibit asking if you want to hire models.
At one event a couple years ago, a guy came over to talk with our CTO (a guy) and I and said point blank to me, “do you have an ownership stake in the company? if not, at least you’ve got one foot in the door to marry this guy?” Nevermind that I’m wearing my wedding ring! All I could do was paint a “go F&%$ yourself” smile on my face and wait for him to leave. The things I would have liked to say, but it just wasn’t worth it in that context.
The problem is, most people don’t walk up to me expecting me to know about APIs, building applications, solving problems specific to their industry or use case, how supply chain works, or anything else important to their business. This is perpetuated by booth babes. How do I know? If I dress in a frumpy or slightly less feminine style, instead of my normal stylish heels and a skirt suit, I get a different reaction. If I wear skinny jeans and flats and a tshirt or hoodie, look my age (early 20s) and have a self-effacing air, they think “oh she’s a nerdy girl” and then they ask the real questions. PUH-LEASE.
If you are a vendor, I have a very strict and very long standing rule – if you use booth babes, I either don’t recommend you to my clients, or I actively campaign against you, and I will never, ever buy from you again. Such vendors have lost more than $1m in recommendations from me alone in the last 10 years, and I doubt I am alone in my opinion of such appalling, women hating sales tactics.
So fast forward to today. I logged in after a few days to see if my romantic idealization of early Slashdot held up against even the 1999-vintage Slashdot lowlife scum. I was saddened and disappointed. I lost my decade long “excellent” karma rating to peer moderation, and it’s no surprise – the peers at Slashdot hate women.
One of my posts had to get more than seven negative flamebait downward moderation clicks to get the score it finally received.
So let’s look at the quality gem of a reply that got +5 moderation (the errors in copy and paste I will leave to the troll – he can’t even do that right):
“ook at my low user ID, I’ve been here for longer than some of you have been alive.”
No one cares. I’m probably the same age as you but I don’t go around pointing it out as if it somehow adds extra weight to the argument.
“I am literally white hot angry with whomever did it b”
You’ll get over it.
“f you have a daughter, I expect you’ll want her to be a geekgrrl. If you want that outcome, you will join me in boycotting booth babes.”
Actually if I had a daughter I’d let her do whatever she wanted. Unfortunately you obviously don’t realise it but you’re just another one of those self righteous prudish males who seem to think that women should only do the jobs YOU approve of. Newsflash pal – its the WOMEN who get to decide whether to do it , not people like you.
I suspect in another century you’d be at the pulpit foaming at the mouth and damning any woman who dared go out with an unmarried man or wear a short skirt or speak before a man gave her permission.
You know what – Fuck you and your kind.
From viol8, a 40-something troll programmer who lives and works somewhere in Europe (if he can be trusted to thump things into the post box), and who comes across as an arrogant Australian or English expat. I can’t be arsed working out who he is any longer – he’s exactly like any number of the worthless women hating smegheads that infest Slashdot.
It’s time to put /. out of its misery and terminal decline. It has been an irrelevant community for years, and now the cesspool is dead to me.
ajv (4061, ex-member /. 1997-2012)
Update: RSS feed – deleted. Twitter – unfollowed. Can I find how to delete my /. account, no I can’t. Help appreciated in the box below.
In his post “PCI’s Money Making Cash Cow”, Andrew Weidenhamer must have had a bad week of being challenged (or in his words, “bullied”) by a PCI DSS Internal Security Assessor (ISA). This is not acceptable, but QSAs must accept that their advice is there to help the organization become compliant, not to provide a cash cow of their own, nor to go unchallenged.
Not knowing the specifics of the background that led to this article, I have to assume that the ISA has pushed back on one or more of:
- Scope – this has traditionally been the QSA’s sole domain, and (uncharitably) they probably don’t want anyone else busting a move in their profitability zone.
- Interpretation of the meaning of various clauses. I wrote the OWASP Top 10 2007, which was incorporated in the PCI DSS. I find it highly amusing to hear some of the “meanings” attributed to what I wrote.
- Being forceful about adhering to the “intent” versus the “letter” of the PCI DSS. This is a problem where the standard has to be deliberately vague, but the Council should be open and honest about what they meant when they wrote it – do they mean a web app, or something else? The PCI DSS is highly focussed on web apps, not other apps. Trying to extend it is like extending a repair manual for a ship to a bus. They both have diesel engines, but you know it doesn’t work that way. Don’t force the issue if you don’t know.
Being in this space right now, I understand the issues here. There are several problems I hope the SSC will pick up and resolve in the next major overhaul of the standard.
- Make the meaning of “in scope” and “out of scope” a great deal more tightly defined. The biggest problem in my view is that it’s far too easy to drag in unrelated systems in cloud / virtualized / managed environments. I’m all for a solid ring fence, but to think the only way to do it is by layer two firewalls is farcical at best and destructive of the Council’s reputation at worst. Firewalls have their place, but as part of a wider set of more than adequate other controls, such as strong authentication, authorization, auditing and escalation. Let’s put it this way: I do nearly all my penetration tests over SSL, through firewalls, and in direct view of IDSs, and I still manage to have a very, very good time. If firewalls are all you’ve got, we’ve got it very, very wrong.
- Leaving the QSA to determine the scope is inherently conflicted. They get a lot more money if they scope it conservatively (i.e. as many of the requirements as possible, and as many systems as possible), and there’s a lot of risk if they scope it to be minimal but to the letter of the standard. I strongly suggest the SSC require tier one merchants to hire two QSAs: one to find the information out and set the scope, and one to assess the desired scope and systems. Or work just like the internal audit versus external audit functions in the financial world, where the ISA’s output is treated as trustworthy and evaluated from time to time. Is either method perfect? No, but it’s a lot less conflicted than the current situation.
- The glossary, the prioritized list, fact sheets, PCI DSS for Dummies, what you heard on the community grapevine, and the guidelines ARE NOT the standard. They can be used to support an argument to do something in the spirit of the standard, but they are most certainly NOT the standard. QSAs – please understand that unless you can demonstrate that your reason for a “not in place” is actually required by one of the in scope requirements, it’s not required to be in place. Is it a good idea? Almost certainly, but that’s a different standard.
- Many folks need and want an Attestation of Compliance … but at what cost? The process of working through not getting an AoC is almost completely uncharted. Most folks don’t even think about this third way, but it’s actually fairly likely. If your activities are all about getting an AoC at all costs, PCI DSS has failed to achieve a good balance. There are places for a black and white compliance standard, and there are places for risk based assessments. If it’s going to cost you $25m to fix a $25 a year problem, that’s a terrible, terrible outcome. I hope the SSC addresses this in the future, as many folks going through PCI DSS compliance will need an AoC but can’t get one because their QSA has said no for the most minor of reasons.
- Make it easy for folks to ask questions directly of the Council. Nearly all of the requirements are vague. One QSA might have been told one thing by the Council, another has never come across the issue before, and you have two opinions – one right and one somewhat wrong. Too many times, an argument that goes on for weeks could be solved with a simple email to the Council. Channeling it through one side of the argument (the QSA) is inherently conflicted. Let’s be open and transparent in this process.
In my view, the best way to deal with a QSA is to be friendly, but make it known that you will challenge them in a collegiate way from time to time, and that there’s nothing personal about that challenge. The QSA may not understand the business or the technology, and they may have got it completely wrong.
On the other hand, you as an ISA or as a hiring company may not understand the intent or learnings of the Council, and need to get your house in order, which is far, far more likely.
PCI DSS does this in a very blunt, non risk assessed way. For the first time ever, someone with a bigger stick is holding you to account to do it the way you should have done it in the first place. There is simply NO EXCUSE for SQL injection or XSS in any app, let alone a payment app. However, so many of the requirements are vague and so open ended as to be nearly impossible to comply with unless you hoodwink the QSA. And that doesn’t serve the real purpose of this exercise.
QSAs who fear going to every meeting with you are not going to offer good advice. They won’t offer advice at all. It’s best to walk a very fine line: be friendly and learn all you can about getting from A to B in the best way possible that achieves credit card security, but don’t be so chummy that you find it hard to say “no” when you need to say “no”.
My rule of thumb is that if you’re having a difficult conversation with your acquirer when you should have been having a difficult conversation with your developers, your marketers, your business or the QSA, then you’ve done it wrong. PCI DSS is here to save your bacon, not be a speed bump. However, there is much to improve in the QSA engagement process, mainly in my view to advance true independence of QSAs.
I am moving over to using Fedora from Ubuntu as I am helping out with the OLPC XS (School Server) on XO laptop effort, which is Fedora based. Fedora 17, codename The Beefy Miracle (seriously), has just been released, so it’s time to update my Linux development workstations.
Installing Fedora 17 in VMWare Fusion / VMWare Workstation 8
There are two problems with Fedora 17 and VMWare at the moment:
- Fedora 17’s graphical installer on VMWare (and I’ve heard on Oracle VirtualBox too) does not show the Back / Next buttons. They are there, and they work if you tab to them, but they are offscreen due to the video mode.
- The next problem is that “Linux Easy Install” doesn’t work in VMWare Tools build 8.8.3 in the Fedora 17 guest. At all. Don’t use it, or you’ll end up in dracut debug shell purgatory after a “dracut Warning: Unable to process initqueue” message. The rest of the instructions here gets you to where Easy install would have managed to get you anyway.
So to get through it, you can either run a text install that produces a very minimal install of Fedora (great for hardened servers like the XS!)
- Boot from the ISO
- Enter the troubleshooting menu
- Press tab to bring up the kernel boot arguments
- Delete the vesa arguments
- Type in “text” on the end of the command line
or just use the graphical install…
- If there’s only one control on screen, just press return and you can go to the next screen
- If there’s many controls on screen, press tab until the focus disappears, and then press tab one more time (moves from the hidden Back button to the hidden Next button) and then press return.
Updating Fedora to allow VMWare Tools compilation
Once you have Fedora installed, login and open a terminal window
sudo yum update
sudo reboot
sudo yum install kernel-devel kernel-headers gcc make
Installing VMWare Tools
Now you’re ready to install the VMWare Tools
- Virtual Machine -> Install VMWare Tools. Unfortunately, F17 now mounts CD ROMs in a user specific location (/run/media/<username>/<volume name>), so you need to know your username if the instructions below don’t work
- Open a terminal
tar zxf /run/media/`whoami`/VMware\ Tools/VMw*.tar.gz
cd vmware-tools-distrib
sudo ./vmware-install.pl
NB: With tools build 8.8.3, there will be compilation errors until an update for the tools is released by VMware, but enough modules will compile to allow you to use shared folders, have auto-resizing monitors, working cut-n-paste, and a few of the other things that make running the tools worthwhile. Drag and drop doesn’t work yet.
Logout and then Restart the system to enable the tools. If you’re text only,
sudo reboot
will do the trick.
You are welcome!
Over at SensePost, there’s a new blog entry musing on Haroon Meer’s talk “Penetration Testing Considered Harmful”. Those who know me know that I’ve held this view for a very long time. I’m sure you could find a few posts in this blog.
Security has to be an intrinsic element of every system, or else it will be insecure. Penetration testing as the sole activity and piece of assurance evidence makes security appear on the fringes of development – something that you pass or fail, something to be commoditized, a box to be ticked, and ultimately ignored. Penetration testing as it is done by most in our industry is incredibly harmful. It’s a waste of investment for most organizations, and they know it, so they try to minimize wastage by minimizing the scope and the time, and by poo-pooing the outcomes.
Penetration testing should be a part of a wider set of security activities, a verification of all that came before. All too often, we come across clients who want to do a one or two day test the day before go-live. They’ve done nothing else, and when you completely pwn them, they’re terribly surprised and upset.
We need to move on to make penetration testing the same as unit testing – a core part of the overall software engineering of every application.
Penetration testing should never be ill informed (zero knowledge tests are harmful and a WAFTAM for all concerned), and it should have access to source, the project team, and all documentation. Otherwise, you’re throwing the client’s money up against the wall and acting unethically, in my view.
Tests should come from the risk register maintained by the project (you do have one of those, right?), as well as the use cases (the little cards on the wall) as well as from the OWASP ASVS / Testing Guides. More focus must be made on access control testing and business logic testing.
Penetration testing has become vulnerability assessment – run a tool, drool, re-write the tool’s results into a report, deliver. No! Write Selenium tasks and automate it. If you’re not automating your pentests, how can your customers repeat your work or test for regressions? They should be taught how to do it.
Folks at consultancies will shriek away in horror at my suggestion, but getting embedded is actually a good thing. Instead of hearing from a client once in a blue moon, you’re integrated into the birth and growth of software. This is a huge win for our clients and the overall security of software.
It’s time to do some curating of the OWASP Developer Guide. This is where my tastes meet the community’s – what do you want in the Guide, and what do you want out of the guide?
As much as I want to be comprehensive, there is a real risk that an 800 page book would never be read. There ARE easter eggs in the Guide that no one has found or bothered to e-mail me about yet, so I know it’s not being read widely.
- What would you like to see IN the Guide? Why?
- What would you like to see OUT of the Guide? Why?
Let me know by June. I’ll be sure to share your thoughts with the Developer Guide mail list.