Did a clean install of Mavericks on my test mac-mini. Things to be aware of for next time:
xcode-select --install
sudo ln -s /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/ /Applications/Xcode.app/Contents/Developer/Toolchains/OSX10.9.xctoolchain
sudo mkdir -p /usr/local/lib
sudo ln -s /usr/local/mysql/lib/libmysql* /usr/local/lib
This script defines two types of responses, HTML and JSON.
Those requests produce JSON replies, depending on the individual field requested. Both the client and server scripts in this example involve DOM traversal. One uses jQuery-style methods (find); the other uses Nokogiri.
gem install wunderbar nokogumbo opal opal-jquery sourcify
ruby watch.rb --port=3030
If your web server is set up to handle CGI, you can drop this script directly into your document directory and run it. If you do so, the requests will all be handled in parallel.
I finally debugged why my cable service was so poor. Long story short, an inexplicable 7dB drop in the incoming line, a bad arrangement of splitters, and another unexplained 7dB drop someplace in the house.
Now for the long story:
My troubles started when Time Warner Cable required me to install mini cable boxes to see the full set of channels that I had purchased. I went from a slightly grainy picture to a clear picture on some channels and intermittent digital encoding artifacts on others. In most cases, slightly grainy was a marked improvement over digital encoding artifacts. In fact, for some channels the result was essentially unwatchable - particularly channel 3 (CBS) and channel 4 (PBS).
Two days ago, one television refused to show anything (all it displayed was “Searching for Channels”). I started debugging by bypassing the box, and that worked, indicating that the cable wasn’t broken. I then swapped equipment with another room, and the problem stayed with the television and not the box.
Remembering that pressing “info” on the remote control would put the box in a debug mode, I found that the working television showed -19.44 dB, and the failing television showed -20 dB.
Putting a signal booster on the working television addressed the digital artifact problem. Putting the same signal booster on the failing television didn’t help.
Working assumptions at this point: with non-digital signals, picture quality degrades linearly with signal strength. With digital signals, viewability is more of a binary quality, and at -20dB the box simply refuses to show anything.
And there is a point after which there isn’t enough signal to be boosted.
Tracing back the line: it comes in from the street to a box; in that box there are both a 2-way splitter and a 4-way splitter; then it goes under the house and is split one final time before reaching the two televisions in question. I suspect that the final splitter was added by the builder and not by the cable company. Similarly, I suspect that the additional 2-way splitter was added when we added a detached garage with a room on the second floor.
I then tested the signal strength at the box (before any splitters), and I found +5.6dB.
Based on this video, Time Warner should be providing me 10 to 15 dB. So the signal strength is about a quarter of what I should be getting. And I should be striving to get between 0 and 5 dB to each television.
The first splitter cut that in half, and the second splitter cut that by a factor of 4. The third splitter cut that by a factor of 2. And signal loss along the line should be on the order of another factor of 2.
That sounds like a lot, but in dB terms that’s about 18 dB of loss. Starting with 5.6 and subtracting 18 leaves -12.4. I am getting 7 less than that.
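As a sanity check on that arithmetic, each halving factor converts to dB as 10·log10(factor). A quick sketch, using the splitter chain and line-loss factor described above:

```python
import math

def db_loss(power_factor):
    """Convert a power-division factor to dB of loss."""
    return 10 * math.log10(power_factor)

# 2-way split, 4-way split, 2-way split, plus roughly 2x loss along the line
factors = [2, 4, 2, 2]
total = sum(db_loss(f) for f in factors)
print(round(total, 1))  # ideal splits alone: ~15.1 dB
```

Ideal splits account for about 15 dB; real splitters lose somewhat more than the ideal (roughly 3.5 dB per 2-way port and 7 dB per 4-way port), which is how the total lands near 18 dB.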
Looking back at my splitters, the first splitter fed half of the strength to one line. Tracing down that line, I found it fed my cable modem. While that’s clearly dear to me, I suspect that this ordering was done when I had problems with my cable modem dropping signal. I have since replaced the modem.
Reordering the splitters so that 3 lines only go through the four-way splitter and two (analog only) signals go through both means that 3 televisions get twice the signal they were getting before, and the cable modem gets half.
Furthermore, replacing the final splitter with a signal booster means that the two televisions that were having problems now have positive signal strength.
So far, no problem with Internet, and the immediate problem with my TV service has been addressed.
Then again, I just got a notice today that four more channels will require a cable box, which leads to the following question:
If Time Warner Cable is moving towards Digital Only service, shouldn’t they be providing enough signal strength to drive all of the devices in the house?
Nearly six years ago, I set up a personal Jabber server using ejabberd. This setup survived the server migration to Ubuntu 8.04 and 10.04. This past weekend, I attempted to migrate that to a server running 12.04 and all I could get out of it was an erlang crash dump.
A quick scan for successors turned up prosody. Configuration was as simple as adding a VirtualHost and setting
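For reference, a minimal Prosody virtual host definition looks something like this (the hostname and admin address are placeholders, not the values from my setup):

```
-- in /etc/prosody/prosody.cfg.lua
admins = { "admin@example.org" }

VirtualHost "example.org"
    enabled = true
```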
My usage is undoubtedly more idiomatic Ruby than idiomatic Chef, and I’m not tapping into the vast Chef ecosystem, but I can now provision a new virtual machine for running tests in under 3 minutes.
Background: I maintain a scenario that I test against multiple versions of Rails and multiple versions of Ruby. I primarily develop on Ubuntu, but I bought a new Mac Mini because some of my readers were having installation problems on Mountain Lion and my previous Mac Mini can’t install Mountain Lion.
Armed with my new Mac Mini, I set off to repeat my testing of various versions of Rails and Ruby. While I have been using, and am happy with, RVM on Ubuntu for dealing with Ruby versions, I decided to try rbenv/ruby-build. What I started with was a new machine, a full installation of Xcode, the Command Line utilities, and Homebrew.
Keeping my build recipes up to date involves rvm get stable for rvm; and, for rbenv, brew update followed by brew upgrade ruby-build whenever brew outdated lists ruby-build.
Changing to the latest version of 1.9.3 using rvm is accomplished with
rvm use 1.9.3. If the latest version isn’t installed, this will fail, and
rvm install 1.9.3 will do the dirty deed. For rbenv, one needs to first get a list of versions using
rbenv install --list, select ones starting with 1.9.3, sort the list numerically by the last digits of the string, and select the largest result.
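That selection step can be scripted; here is a sketch, with a canned version list standing in for the output of rbenv install --list:

```shell
# Pick the newest 1.9.3 patch level from a version list.
# In practice, pipe `rbenv install --list` in place of $versions.
versions='1.9.3-p0
1.9.3-p125
1.9.3-p448
2.0.0-p0'

# keep 1.9.3 entries, sort numerically by the patch number after "p"
latest=$(printf '%s\n' "$versions" | grep '^1\.9\.3' | sort -t p -k 2 -n | tail -1)
echo "$latest"   # 1.9.3-p448
```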
Building the latest 2.1.0 can be done with
rvm install ruby-trunk-nxxx where the
-n is optional but useful for tracking multiple “latest” versions. I use the subversion revision number for xxx.
Building the latest 2.1.0 is where this all goes downhill with rbenv on Mac OS X 10.8.3. There is a recipe for 2.1.0-dev. It automatically downloads and builds openssl to work around a problem if it detects has_broken_mac_openssl. I like that in a ruby build tool.
Unfortunately, building Ruby 2.1.0 doesn’t work with Apple’s provided version of autoconf, nor does ruby-build automatically know where homebrew puts its version of autoconf. Adding
/usr/local/opt/autoconf/bin explicitly to the PATH allows me to build exactly one version of 2.1.0-dev. Fortunately, I can rename that directory to add a
-rxxx. Unfortunately, the shims have hardcoded paths in the shebang lines. Fortunately, a symbolic link compensates for that. This does mean that in the infrequent event that I want to go back to a prior build of 2.1.0 I need to remember to update the symbolic link too, but I can live with that.
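The rename-plus-symlink dance can be simulated; in this sketch a scratch directory stands in for rbenv’s versions directory, and the revision number is made up:

```shell
# simulate keeping multiple 2.1.0-dev builds side by side
root=$(mktemp -d)
mkdir -p "$root/2.1.0-dev/bin"
echo 'ruby 2.1.0dev' > "$root/2.1.0-dev/bin/ruby"

cd "$root"
mv 2.1.0-dev 2.1.0-dev-r43210      # tag the build with its svn revision
ln -s 2.1.0-dev-r43210 2.1.0-dev   # hardcoded paths in shims still resolve
cat 2.1.0-dev/bin/ruby             # prints: ruby 2.1.0dev
```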
Determining when the previously built “latest” 2.1.0 is no longer current is a matter of finding out where the git checkout of the ruby repository was placed. For rvm, this is
$rvm_path/repos/ruby. For rbenv, the default is to throw this away, but this default can be overridden using the
--keep option, in which case it is saved to
$RBENV_ROOT/sources/2.1.0-dev/ruby-2.1.0-dev. If a
git pull in those directories comes up with
Already up-to-date., you are golden. If not, time to rebuild.
Rebuilding with rvm will make use of what I already have. Rebuilding with rbenv will fail unless I first erase the
$RBENV_ROOT/sources directory, after which point it will do a full download of both openssl and a full (albeit shallow) git clone of ruby.
Just when I think I am done, I find out that what works using foreground login sessions via ssh doesn’t work via background sessions like cron or Apache CGI. Symptoms are that ruby-build downloads and builds openssl, downloads and builds ruby, then complains with:
The Ruby openssl extension was not compiled. Missing the OpenSSL lib?. This took a while to isolate, as building Ruby 2.1.0 takes a while even with modern hardware. Ultimately, I determined that the build only succeeds if
/usr/sbin is in my
PATH. This makes no sense at all to me, but it works. Moving on...
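The cure for both the autoconf and the background-session problems was to put the needed directories on the PATH explicitly, for example at the top of the cron job or CGI script (the autoconf path is where Homebrew happened to put it on my machine):

```shell
# prepend the directories the build needs; harmless if already present
export PATH="/usr/local/opt/autoconf/bin:/usr/sbin:$PATH"
echo "$PATH" | tr ':' '\n' | head -n 2   # the two build directories come first
```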
At this point, I have everything working, but it certainly isn’t pretty. Time to refactor.
One way to approach this is to move all of this logic to an rbenv plugin. rbenv whatis is a step in this direction.
A better approach would be to first see which parts of this I could either motivate to get fixed in rbenv itself, perhaps even contributing code myself. The first step is to start a dialog with the developers, but alas I can find no IRC channel or mailing list. I see that others have resorted to using the issue tracker for this, but that doesn’t seem quite appropriate for this grab bag of small issues and WIBNIs I have at this point.
A prioritized wish list to start with:
- If a source is already downloaded or cloned, use it.
- Allow me to provide an additional revision number when building 2.1.0-dev. Either that or automatically do this for me.
- If homebrew has installed a later version of autoconf, use it in preference to the Apple provided version.
- Figure out what is needed in /usr/sbin and either avoid it or make sure that it is found.
It is indeed possible that I overlooked something and/or one or more of these are user errors on my part. If so, pointers would be welcome.
Whatever is not done, I can put into a
rbenv get or equivalent command which does whatever is necessary under the covers to switch to the specified version, including build it if necessary.
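A sketch of what such a wrapper might look like as a shell function. The name rbenv_get is my invention, not a real rbenv command; it uses only the stock rbenv versions, install, and global subcommands:

```shell
# hypothetical `rbenv_get`: switch to a version, building it first if absent
rbenv_get() {
  version="$1"
  if ! rbenv versions --bare | grep -qx "$version"; then
    rbenv install "$version"
  fi
  rbenv global "$version"
}
```

The same shape would work with rbenv local or rbenv shell in place of rbenv global, depending on the desired scope.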
Five years ago today, I bought a mac mini to do book development. On Wednesday, I bought a new mac mini simply because I’m told that Mountain Lion won’t install on a vintage 2008 mac mini, and because my readers have had problems on Mac OS X 10.8.
Overall, I have continued to be unimpressed, and can’t help but wonder why my open source friends seem attracted to this system. Even after downloading Xcode command line utilities, I kept encountering messages like “can’t find C compiler” and “C compiler cannot create executables”. Configure is something I haven’t run (at least not directly) for years, and my primary operating system is Ubuntu.
Apache is no longer something you can launch from the settings. System Ruby is still at 1.8.7. You need to upgrade openssl simply to install Ruby 2.0.
Scott Hanselman: Plex is the media center software ecosystem I’ve been waiting for
Unhappy with Time Warner Cable, I’ve been exploring Netflix, Dish, Sling, Roku, Samsung, ffmpeg, HandBrake, and cclive. Next up, some form of video capture device... at the moment I’m leaning towards Hauppauge.
I’m not quite prepared to declare Plex as the centerpiece of my home media center, but it certainly has become a key component.
Obligatory Cable Guy reference.
Mike Amundsen: I have the even greater privilege of working with Leonard and Sam on a new book - “RESTful Web APIs”. It’s scheduled for completion by the end of Q1 2013 and should be available soon after.
While I’m formally on this project, I’m not planning on doing any writing beyond possibly an introduction. As Mike put it, this book isn’t merely a 2nd edition, but rather more of a “follow-up” seven years on. I’m very much looking forward to seeing where Mike can help Leonard take this work.
I’ve looked at the markup being returned and it looks clean to me. The .htaccess file looks fine. A
git status command shows that none of the files on the server have been modified.
Can somebody identify what is causing Google to be concerned?
Peter Linss: I really want to see the TAG be more involved with the rest of the working groups at the W3C
I’ll come out and say it: I’m a skeptic. Each of the nominees is a good person.
I’ll note that three of the four “TAG reformist” nominees (Alex Russell, Marcos Cáceres, Anne van Kesteren, and Yehuda Katz) do NOT list getting involved with the rest of the working groups at the W3C as a goal in their statements.
What am I missing?
It started with two notifications we received via postal mail. First, Time Warner was going to start charging us rent for an outdated cable modem. Second, they were going to drop a number of cable channels, but if I acted now, I could request a digital adapter which would allow me to watch these channels on exactly one TV.
So I did some research and purchased a DOCSIS 3.0 compatible modem that can do IPv6, figuring that would future-proof me for a while, and connected it up. I actually managed to get an IP address assigned, but everything I tried after that was redirected to a site saying that I needed to be “provisioned” and to call a number. Upon calling that number, I got connected with a person whose sole purpose seemed to be to upsell me to a higher plan. After I politely but firmly refused, I was transferred and placed on hold for about 30 minutes. The woman who tried to help me get connected couldn’t get it to work, so she transferred me to level 3. Another five minutes later, a gentleman picked up and also had trouble. It took him about 30 minutes to get it to work; apparently they didn’t give him instructions on how to deal with DOCSIS 3.0 modems, despite my picking one of the options on the list they provided to me. But he was pleasant and apologetic throughout, and eventually did manage to get it working.
The next day I drove 15 minutes to stand in a 20 minute line to do what amounted to a 60 second transaction: here’s a box, here’s a receipt. Thank you and goodbye.
As to the dropped channels... I dutifully filled out an online form requesting a digital adapter, and got first a confirmation and subsequently a notification that the order was “complete”... where the latter merely indicated that something would be shipping in 3-5 business days, giving me a confirmation number. That was 18 November.
The box never showed up.
Yesterday, the channels went dark, and I went online. After using Chrome to override my User Agent so that I could make use of their chat system, I waited over 20 minutes for a representative. After checking, he said that there was nothing he could do for me, and gave me a number I could call. I called that number and was told that the wait time would be more than 30 minutes. As the chat window was still open, I asked if there was anything else I could do. He said call back late in the evening when the wait times would be less. I was not happy and closed the chat window. I was then presented with a survey, in which I responded that the person was not able to solve my problem and that I was not happy.
I tweeted to TWCableHelp and got a DM five hours later asking me for my phone number. Before I went to bed, I sent an email. When I woke up I got a response indicating that the email had been forwarded to “our regional contacts”, who would be contacting me. They have not.
I called again, and was told that there would be a 20 to 25 minute wait time. It was closer to 30. I was told that another digital adapter had been placed on order. I asked for a confirmation number, and was told that she didn’t have one. I asked for an email, and she said that one would be sent within 48 hours. I was given a case number. And that was all she could do for me.
At this point, I have nobody I can contact, no tracking number, and no confidence that this time will turn out any different. And a number of dark channels.
This process has turned a fairly complacent Time Warner customer into one that is actively seeking alternatives. In looking around, I see plenty of promo offers of more service than I have (basic cable and basic internet) for considerably less than I am currently paying. I am OK with waiting an hour or more for an answer, but I am not OK with having to be on hold for that entire time. And I’m definitely not OK with renting a separate box per device simply to get access.
So I am beginning my research: starting with looking for alternatives to cable TV. What I want is a single plan that allows me to watch whatever I want wherever I want. I am OK with upgrading my devices as long as we are talking about a purchase not a lease.
Any pointers people might leave in comments would be appreciated.
I see that Henri Sivonen is once again being snarky without backing his position. I’ll state my position, namely that something like the polyglot specification needs to exist, and why I believe that to be the case.
The short version is that I have developed a library that I believe to be polyglot compatible, and by that I mean that if there are differences between what this library does and what polyglot specifies, one or both should be corrected to bring them into compliance.
I didn’t write this library simply because I am loony, but very much to solve a real problem.
The problem is that HTML source files exist that contain artifacts like consecutive <td> elements; people process such documents using tools such as anolis; and such tools often depend on — for good reasons — libraries such as libxml2 which do an imperfect job of parsing HTML correctly. The output produced by such tools when combined with such libraries is incorrect.
Note that I stop well short of recommending that others serve their content as application/xhtml+xml. Or that tools should halt and catch fire if they are presented with incorrect input. In fact, I would even be willing to say that in general people SHOULD NOT do either of these things.
Now that I have provided instance proofs of the problem and the solution, I’ll proceed with the longer answer. I will start by noting that Postel’s law has two halves, and while the HTML WG has focused heavily on the second half of that law, the story should not stop there.
To get HTML right involves a number of details that people often get wrong. Details such as encoding and escaping. Details that have consequences such as XSS vulnerabilities when the scenario involves integrating content from untrusted sources. Scenarios which include comments on blogs or feed aggregators. Scenarios that lead people to write sanitizers and employ the use of imperfect HTML parsers.
It is well and good that Henri maintains — on a best effort basis only — a superior parser for exactly one programming language. Advertising this library more won’t solve the problem for people who code in languages such as C#, Perl, PHP, Python, or Ruby. Fundamentally, a “tools will save us” response is not an adequate response when the problem is imperfect tools.
The problem that needs to be addressed is very much the flip side of, and complement to, the parsing problem that HTML5 has competently solved. Given a handful of browser vendors and an uncountable number of imperfect documents, it very much makes sense for the browser vendors to get together and agree on how to handle error recovery. By the very same token, it makes sense for authors who may produce a handful of pages to be processed by an uncountable number of imperfect tools to agree on restrictions that may go well beyond the minimal logical consequences from normative text elsewhere if those restrictions increase the odds of the document produced being correctly processed.
Yes, it would be great if this weren’t necessary and all tools were perfect. Similarly, it would be great if browser vendors didn’t have to agree on error recovery as this makes the creation of streaming parsers more difficult. The point is that while both would be great, neither will happen, at least not any time soon.
These restrictions may indeed go beyond “always explicitly close all elements” and “always quote all attribute values”. It may include such statements as “always use UTF-8”.
Such restrictions are not a bad thing. In fact, such restrictions are very much a good thing.
Doug Schepers: WebPlatform.org will have accurate, up-to-date, comprehensive references and tutorials for every part of client-side development and design, with quirks and bugs revealed and explained. It will have in-depth indicators of browser support and interoperability, with links to tests for specific features. It will feature discussions and script libraries for cutting-edge features at various states of implementation or standardization, with the opportunity to give feedback into the process before the features are locked down. It will have features to let you experiment with and share code snippets, examples, and solutions. It will have an API to access the structured information for easy reuse. It will have resources for teachers to help them train their students with critical skills. It will have information you just can’t get anywhere else, and it will have it all in one place.
But it doesn’t. Not yet.