Costco is selling the Samsung SyncMaster 2443BWX for some low price. I bought one as my third monitor, between two Gateway 2100s. The SyncMaster has higher resolution, a slightly larger screen, and seemed to have better specs in general. Once upon a time, SyncMaster was a dependably excellent brand.
It’s terrible and I’ll be returning it. Its stand doesn’t adjust for height (or rotation), so I had to put it on a dictionary to bring it up to the appropriate eye level. Even at the proper height, it has very noticeable off-axis viewing problems: the slightest deviation in your head height and the bottom of the screen darkens.
If it’s your only monitor, you might learn to live with this, but I am in the situation where I have two much-better monitors flanking it on either side. So, if you are looking to join the prestigious 3-monitor club, seek elsewhere.
A tangential post on an email list I subscribed to touched on a subject near and dear to my heart, but rather than go off-topic on that list, I thought I'd respond here:
[Roger Penrose] believes that the existence and human knowledge of the random real (uncalculable) numbers ... shows that computers will never be able to match human brains.... [M]athematicians know, ala K. Godel, mathematics is consistent even though they can't prove it.... Godel proved that mathematics can't prove its consistency (without being inconsistent along the way). The fact that humans can make such a proof and a computer simply CANNOT is the R. Penrose basis for his belief. R. Penrose, also, has some biological arguments about axons to further bolster his belief.
This is a summation of the position that Roger Penrose laid out in a book called The Emperor's New Mind that was originally published in 1989. As luck would have it, I had just been hired as the Technical Editor of the premier magazine on artificial intelligence at the time, so not only did I have a chance to read the book closely, I had the privilege of being a conduit for some of the discussion regarding it (in those pre-Web days). (I almost pulled off a debate between Penrose, Searle, and Dennett, which would have been awesome.)
The OP slightly overstates Penrose's position on Gödel's Proof: we know it is possible for a computer to construct Gödel's Proof (10 Print "Theorem XI. Let k be any recursive consistent..."); the claim is that a computer cannot know Gödel's Proof to be true. That mathematical certainty is a phenomenon that is not available to Turing machines (i.e., computers as we generally know them).
Penrose's claim is that mathematical certainty is a privileged phenomenon; that is, it's a real thing in the sense that mathematicians definitely experience it, and it's something that we can be sure a (Turing) machine cannot experience.
I've always felt Penrose's claim to be dubious.
Certainty, it seems to me, is just the phenomenon that arises from short-cutting previously-decided hard problems. Humans are very good at creating such short-cuts and their phenomenal impact is very strong (i.e., we feel them strongly). Penrose's mathematical certainty accords with mathematics; other people have certainty that accords with their religious beliefs.
Any machine operating in real time is going to massively rely on shortcuts, both built-in (e.g., objects persist when out of sight) and dynamic (e.g., "The square of the hypotenuse of a right triangle is equal to the sum of the squares of the legs").
When a mathematician looks at a proposition, they do not exhaustively explore it. Humans do not recurse deeply into a problem; they very quickly switch to meta-reasoning ("I assumed it was true and went down this path, and I assumed it was false and went down that path. Now I see that the paths in front of me are similar to the paths I've already traveled, so instead of going down those paths, I'm certain of the proposition's validity."). I don't see why a computer cannot do the same thing, replacing "I'm certain" with "I'll create a shortcut." (Implementation note: when your stack gets so big, reason about the stack. And no, you can't define this with pure recursion; you have to have an arbitrary limit to the tactic. But you don't need many levels of meta-reasoning to be beyond human facility.)
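Here's a toy sketch, in Python, of what I mean by "certainty as shortcut" (everything here is invented for illustration: the proposition encoding, the depth limit, and especially the placeholder where the real meta-reasoning would go):

# Propositions are ground formulas: True/False leaves, or tuples like
# ('not', p), ('and', p, q), ('or', p, q).
MAX_DEPTH = 50    # the arbitrary limit to the tactic
decided = {}      # shortcuts: previously-decided propositions

def decide(prop, depth=0):
    if isinstance(prop, bool):
        return prop
    if prop in decided:          # cache hit: "I'm certain"
        return decided[prop]
    if depth > MAX_DEPTH:
        # placeholder: here is where you'd reason about the stack
        # instead of recursing any deeper
        raise RuntimeError("switch to meta-reasoning")
    op = prop[0]
    if op == 'not':
        verdict = not decide(prop[1], depth + 1)
    elif op == 'and':
        verdict = decide(prop[1], depth + 1) and decide(prop[2], depth + 1)
    else:  # 'or'
        verdict = decide(prop[1], depth + 1) or decide(prop[2], depth + 1)
    decided[prop] = verdict      # create a shortcut for next time
    return verdict

print(decide(('and', ('or', True, False), ('not', False))))  # True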
Of course, all of us know there's a mile of difference between knowing of something and experiencing it: the phenomenon of reading a thermometer on a stove is fundamentally different than burning yourself. Similarly, a mathematician would say that "certainty" is not the same phenomenon as "pattern-match and pop the stack."
But this is a long-discussed problem: it's even called "the hard problem of consciousness." My problem with Penrose is that I just don't see what his argument brings to the table beyond this existing problem. I don't see how he makes the hard problem any harder.
The easiest way to solve the problem is to say that subjective phenomena exist outside the realm of classical mechanics, that there is a "ghost in the machine." Penrose posited that nanoscale structures in the brain might be a channel by which quantum-mechanical mechanisms are amplified to become large enough to interact with neurons. In other words, our consciousness might be mechanical, but dependent on an unknown aspect of quantum mechanics.
I've never heard any evidence that nanoscale structures are important to consciousness. They exist and are complex, so to my evolutionist mind, they are probably not accidental (although they may be). But whether their function is structural or related to mental functioning is, I think, an open question.
In short, I've always felt that Penrose, while undoubtedly brilliant, didn't really advance the debate greatly.
This from a reader of a newsletter sent out by one of my publishers:
First, quit being political by having garbage like this:
<QUOTE FROM NEWSLETTER>
In the spirit of new U.S. President Barack Obama's call for service
in our communities, we offer up this e-mail from SPTechReport reader
I didn’t vote for that moron and it lowers your credibility and
questions your intelligence by including this kind of stuff.
Once upon a time when I was a magazine editor, we used the phrase “politically correct.” A natural desire on the part of an editor is to avoid writing that unintentionally offends (when you offend, you want it to be intentional and precise). “PC” was the label for writing that sloppily or unknowingly promulgated racism or sexism. It was generally used with an ironic tone to acknowledge the arbitrariness of language and sensibilities – “That’s not PC! Rugs are ‘Oriental,’ people are ‘Asian’” – to which the appropriate response was “Thank you for raising my awareness.”
As is often the case, the irony was lost on the feverishly sincere. On the Right, “PC” became an epithet that allowed one to pretend that offense was solely the responsibility of the listener, not something engendered by the speaker. On the Left, I think it’s now universally acknowledged that the fear of offense has been counterproductive to effective dialogue (who am I, a white male, to offer a perspective on race or gender?).
Now, we get this, true “political correctness.” True, it is a political statement (“of or relating to your views about social relationships involving authority or power”) and I suppose that a thin-skinned Objectivist might find a “call for service in our communities” repellant, but I doubt that is what is at work with the letter-writer (anyone who labels Obama a “moron” fails the “grounded in reality” part of Randian philosophy).
Instead, we see exactly what both candidates in the last election decried: the knee-jerk reaction that anything “they” say is imbecilic and anything “we” say is self-evident and stirring. We have to move beyond that.
Especially because in this post-W age, a conservatism based on anti-intellectual ideology is a losing position for at least a generation.
I have a brand new 4” refractor which is my first “real” telescope (as a kid I had a let’s-say-3” Newtonian, your typical shopping-mall refractor, and I’ve had some decent binos and spotting scopes since then).
The world of amateur astronomy has vastly changed since I was a kid. Alan Zeichick called something “the greatest innovation since the big dob, the go to scope, and the SC.” All innovations which post-date my childhood! (Well, I think Schmidt-Cassegrains were hitting the scene…)
One of the options when buying a telescope today is a “go to” computer which knows the orientation of your tripod and your latitude. You type in what object you want to look at (Jupiter, the Andromeda Galaxy, the Wild Duck star cluster), move your scope around until the numbers on the computer read “0” and then look through the eyepiece (“Wow, there it is.”)
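The arithmetic behind that readout is just a coordinate conversion. Here’s a rough Python sketch of the idea (my own simplification: it ignores atmospheric refraction, mount-alignment error, and getting from clock time to sidereal time):

from math import sin, cos, asin, atan2, radians, degrees

def radec_to_altaz(ra_hours, dec_deg, lat_deg, lst_hours):
    # Where to point: equatorial (RA/Dec) to horizontal (alt/az),
    # given your latitude and the local sidereal time.
    ha = radians((lst_hours - ra_hours) * 15.0)   # hour angle
    dec, lat = radians(dec_deg), radians(lat_deg)
    alt = asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
    az = atan2(-cos(dec) * sin(ha),
               sin(dec) * cos(lat) - cos(dec) * cos(ha) * sin(lat))
    # altitude in degrees; azimuth from north, increasing eastward
    return degrees(alt), (degrees(az) + 360.0) % 360.0

# A star on the meridian (hour angle 0), dec 0, from latitude 45 N:
print(radec_to_altaz(6.0, 0.0, 45.0, 6.0))   # roughly (30.0, 180.0)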
Such computers aren’t cheap and when researching your purchase, you’ll get a lot of “computers are all well and good, but ultimately, star-hopping is both effective and satisfying.” So, like me, you might decide to kick it old school, especially if, like me, you think “Gee, I know several constellations and can pick out Andromeda if I can see Cassiopeia.”
What I’ve concluded, after two nights of abject failure, is that (a) I’m an idiot and (b) star hopping is like programming without an IDE. The “I’m an idiot” aspect is simply reinforcing data that’s been accumulating for some time, so let’s skip over that.
It’s very difficult for an expert to anticipate what will baffle a newcomer. In the case of star-hopping, an expert won’t blink at “look 2 degrees SW of a hook-shaped formation found 5 degrees along a line defined by Alpha and Theta.” In the case of programming, an expert doesn’t need a tree of user-defined objects and methods taking up screen space. And the expert’s challenge is compounded because the problem isn’t just remembering what was hard long ago: “Go To” scopes have only come along recently, and while IDEs have been around for 25 years, it’s only been about a decade since they surpassed the command line (the breakthrough, I think, was the refactoring IDE).
For the cost of a “go to” mount, I could get two or three high-quality eyepieces. For the cost of an IDE (even if it’s just the time spent mastering an OS IDE) you could learn a different language or library.
As a newcomer, you face two different payoff curves (n.b.: not the same as a learning curve!):
The expert might say “Oh, eventually, you’ll appreciate the work of the slower, more ‘full-bodied’ learning curve:”
But even if you accept that curve, the issue of what to do is still difficult. You actually have to integrate under the curve:
There’s some period of time when the “easy” approach is more satisfactory. During that time, you are accumulating a surplus of satisfaction (the area ‘A’ in the above illustration). Ultimately, the “hard” approach may provide more satisfaction at a given moment, but there’s still a “catch up” period (‘B’) where your total satisfaction is still less than the total satisfaction with the “easy” approach (in a sense, you have to pay off a debt you’ve incurred). It’s only when you get to ‘C’ that the slower, harder approach really pays off.
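If you like, you can put toy numbers on this (the two satisfaction curves below are made up; only their shapes matter: “easy” pays off immediately and plateaus, “hard” starts slow and keeps climbing):

def easy(t):
    return min(t, 5.0)     # quick payoff, then a plateau

def hard(t):
    return 0.4 * t         # slow but steady

# Integrate both satisfaction curves and find the end of region 'B':
# the moment total "hard" satisfaction finally catches up with "easy".
dt = 0.01
easy_total = hard_total = 0.0
t = 0.0
crossover = None
while t < 50.0 and crossover is None:
    easy_total += easy(t) * dt
    hard_total += hard(t) * dt
    if t > 0 and hard_total >= easy_total:
        crossover = t
    t += dt

print(crossover)   # with these made-up curves, about 22; 'C' starts here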
To a professional, who moved through the ‘A’ and ‘B’ periods at a young age, ‘C’ dominates the curve and it seems natural to say “Slow and steady wins the race.” But if you don’t spend a long time in ‘C’ then the “easy” route is ultimately smarter. And, relevant to software developers, you are unlikely to be writing code at 65. If you’re a developer in the first world, you’re unlikely to be writing code at 45 or maybe even 35. The salary pressure from the BRIC economies is too great. The finish line is closer than you think.
As to the “go to” mount, I still don’t know what to do.
I’m as thankful as anyone that everyone got off the plane that went down in the Hudson, but I’m kind of ticked that all the news sites are calling it “The Miracle on the Hudson,” or “Miracle Landing.” That cheapens the tens of thousands of hours the pilot spent flying, the Lord-knows-how-many training and simulation sessions he’s been through, and his general awesomeness. (I don’t know much about aviation, but I know that water-ditching an A320 with both engines out and only 3,000’ of altitude to start with is not easy.)
Chesley “Kick-Ass” Sullenberger (the Third).
For much of the past week, I’ve been doing a bunch of accounting stuff. I am incorporating (as “Faster Programmer LLC” – as in “Faster can mean both higher productivity and higher performance,” but actually as in “Faster, Programmer! Code! Code!”) and trying to get monetary things (new bank account, credit card, tax situation) as clean as possible.
Unfortunately, that’s meant not getting any work done for my clients, the Jolt Awards (in full swing, although I have to wonder if this is not the last time), or my recreational programming.
Meanwhile, I got myself a present for my recently-passed 45th birthday, a StellarVue 102ED. I got first light through it last night (not a very dark night, but M42 was spectacular, of course), an astronomy club meeting is tonight, and then on Thursday there’s a star party up on Mauna Kea (How cool is that? Assuming, that is, by “cool” you mean “geeky.” Which I do). All of which means sleep deprivation.
I've been writing this blog since 2002, which I'm fairly sure makes me the graybeard among the Big Island's small blogging family. For what it's worth, I also was a magazine editor for 7 years (and won a few awards). So I'm going to shake my finger at the trio of Big Island bloggers who have spent the past week criticizing each other: Stop it right now. Don't make me stop this car!
D: You had every right to take those photographs in a public place and you had every right to write what you did. Don't let anyone tell you differently.
T: As a journalist yourself, you should know to be extra careful about labeling someone's writing as "irresponsible" on the basis of a differing account coming from a government official.
A: Don't get caught up between D&T.
ResolverOne is one of my favorite applications in the past few years. It's a spreadsheet powered by IronPython. Spreadsheets are among the most powerful intellectual tools ever developed: if you can solve your problem with a spreadsheet, a spreadsheet is probably the fastest way to solve it. Yet there are certain things that spreadsheets don't do well: recursion, branching, etc.
Python is a clean, modern programming language with a large and still-growing community. It's a language which works well for writing 10 lines of code or 1,000 lines of code. (ResolverOne itself is more than 100K of Python, so I guess it works at that level, too!)
From now (Dec 2008) to May 2009, Resolver Systems is giving away $2K per month to the best spreadsheet built in ResolverOne. The best spreadsheet received during the competition gets the grand prize of an additional $15K.
Personally, it seems to me that the great advantage of the spreadsheet paradigm is a very screen-dense way of visualizing a large amount of data and very easy access to input parameters. Meanwhile, Python can be used to create arbitrarily-complex core algorithms. The combination seems ideal for tinkering in areas such as machine learning and simulation.
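For instance (a sketch of the flavor, not actual ResolverOne code; in practice the sample count would come from an input cell and the estimate would land in an output cell):

import random

def estimate_pi(samples):
    # Monte Carlo: throw darts at the unit square and count how many
    # land inside the quarter circle. The kind of arbitrarily-complex
    # core algorithm that's painful in cell formulas and trivial in Python.
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(100000))   # roughly 3.14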
I try to do some recreational programming every year between Christmas and New Year. I'm not sure I'll have the time this year, but if I do, I may well use ResolverOne and Python to do something.
Bonus: Infer.NET integrates with ResolverOne.
import sys
import clr

# Point IronPython at the Infer.NET assemblies and load them
sys.path.append("c:\\program files\\Microsoft Research\\Infer.NET 2.2\\bin\\debug")
clr.AddReferenceToFile("Infer.Compiler.dll")
clr.AddReferenceToFile("Infer.Runtime.dll")

from MicrosoftResearch.Infer import *
from MicrosoftResearch.Infer.Models import *
from MicrosoftResearch.Infer.Distributions import *

# Two fair coins; ask for the posterior probability that both are heads
firstCoin = Variable[bool].Bernoulli(0.5)
secondCoin = Variable[bool].Bernoulli(0.5)
bothHeads = firstCoin & secondCoin

ie = InferenceEngine()
print ie.Infer(bothHeads)

Which produces:

c:\Users\Larry O'Brien\Documents\Infer.NET 2.2>ipy InferNetTest1.py
Compiling model...done.
Initialising...done.
Iterating: .........|.........|.........|.........|.........| 50
Bernoulli(0.25)
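Which is just right: two fair coins both come up heads 0.5 × 0.5 = 25% of the time, and that's the posterior the engine reports, Bernoulli(0.25).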
Alexa Weber-Morales of Parallelaware asked me my views on "Software Transactional Memory: Why Is It Only A Research Toy?" given my recent column predicting that the stars are aligning for STM as the model of choice for the manycore era. (Incidentally, if they ever bring back Schoolhouse Rock, my suggested title for that column was "Transaction Faction Gaining Traction." The song practically writes itself!)
The Cascaval article is very interesting. Although it's the most pessimistic thing I've seen about STM, I tend to give lots of credence to people who say "We tried this for two years and it failed." On the other hand, the core of their complaints is "muddled semantics" and performance, both of which are fast-moving areas.
The Harris et al. paper addresses several areas of semantics, including exceptions, and it would be fascinating to hear Cascaval et al.'s reaction to that paper (and vice versa).
Performance... there's not a doubt in my mind that TM will only be practical with some level of hardware support. I'll go further and say that there's not a doubt in my mind that whatever concurrent programming model succeeds will require some level of hardware support. I don't think that's news. The challenge is making sure that you build hardware that's consistent, which fundamentally boils right back to the semantic issue. Without a calculus for this stuff, the hardware guys are flying a little blind.
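For anyone who hasn't seen the model being argued about, here's a toy of it in Python (a single-variable optimistic transaction with retry; a sketch of the idea, not a real STM, and multi-variable versions are exactly where the semantics get muddy):

import threading

class TVar(object):
    # a transactional variable: a value plus a version stamp
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

def atomically(transaction, tvar):
    while True:
        # optimistic read: snapshot the value and its version
        snapshot, seen_version = tvar.value, tvar.version
        result = transaction(snapshot)        # do the work on the snapshot
        with tvar.lock:
            if tvar.version == seen_version:  # nobody interfered: commit
                tvar.value = result
                tvar.version += 1
                return result
        # conflict: someone else committed first, so re-run the transaction

balance = TVar(100)
print(atomically(lambda b: b + 10, balance))   # 110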
So I look at these two articles and think that it's a little bit of Cascaval saying "the glass is mostly empty" and the Harris article filling up the cup a little and saying "looks like the cup is pretty full."
The glamorous life of a software engineering consultant, Pt. 1
Larry O'Brien says: (7:46:07 AM)
I see you're pushing testing and *scribble* out of 1.3.
Client says: (7:46:23 AM)
we need to move forward
Larry O'Brien says: (7:46:26 AM)
Well.... Not really....
Client says: (7:46:29 AM)
testing needs to wait until we get it done
Client says: (7:46:33 AM)
there is alot of things to fix
Client says: (7:46:42 AM)
it doesnt make sense to do test cases when it doesnt work
Larry O'Brien says: (7:46:51 AM)
Just for the record: I disagree with that.
Larry O'Brien says: (7:46:56 AM)
But it's your project...
Client says: (7:47:02 AM)
Client says: (7:47:08 AM)
I have to get it done this week Larry
Client says: (7:47:14 AM)
I cant do that with *Person1* writting tests
Client says: (7:47:25 AM)
fuck the automated test
Larry O'Brien says: (7:47:27 AM)
I understand your perspective
Client says: (7:47:36 AM)
Once I meet my deadline
Client says: (7:47:41 AM)
I will move back to tests
The glamorous life of a software engineering consultant, Pt. 2 (I swear, 1.5 hours later...)
Client says: (9:29:19 AM)
Larry question please
Client says: (9:29:42 AM)
arent we supposed to be showing *scribble*?
Larry O'Brien says: (9:30:21 AM)
Larry O'Brien says: (9:30:26 AM)
Larry O'Brien says: (9:32:29 AM)
i do not knw why *scribble*
Larry O'Brien says: (9:32:59 AM)
OK, I think I see what the problem is...
Larry O'Brien says: (9:33:13 AM)
I _think_ the problem is that we're seeing this *scribble*
Client says: (9:33:35 AM)
yes but didnt we *scribble* it to the *scribble* also?
Client says: (9:33:46 AM)
we used to show them in the *scribble*
Larry O'Brien says: (9:34:52 AM)
OK, so the *scribble scribble scribble*
Larry O'Brien says: (9:35:59 AM)
So what I'm saying is that to show *scribble* I _think_ in the *scribble* there should be *scribble*
Larry O'Brien says: (9:36:23 AM)
I don't KNOW this is the problem, but I THINK this is the issue
Client says: (9:40:37 AM)
ok I am going to submit this to jira also
Client says: (9:40:43 AM)
I hate finding this things
Client says: (9:40:45 AM)
so late on the game
Larry O'Brien says: (9:41:14 AM)
not to get antagonistic about this issue, but in my opinion, this is why testing is important early, not late
Larry O'Brien says: (9:41:34 AM)
this is why I delaying test development is not what I recommend
Client says: (9:42:28 AM)
sometimes each developer should see this things
Client says: (9:42:32 AM)
I believe in testing
Client says: (9:42:42 AM)
but also that each developer should do things right the first time
Client says: (9:42:47 AM)
I am so sick of findiing problems
Larry O'Brien says: (9:44:02 AM)
I don't want to get into an argument.
The June 2008 CACM contains the article "A Risk Profile of Offshore-Outsourced Development Projects."
Since this is a common profile, I thought I'd reproduce the Top 10 Risks. Some of these are universal across all project profiles ("Lack of top management commitment"), but others are definitely more problematic for offshored projects. I've highlighted those I think are notably different. All of these should be addressed in your project planning...
- Lack of top management commitment
- Original set of requirements is miscommunicated
- Language barriers in project communication
- Inadequate user involvement
- Lack of offshore project management know-how by client
- Failure to manage end-user expectations
- Poor change controls
- Lack of business know-how by offshore teams
- Lack of required technical know-how by offshore team
- Failure to consider all costs
That's not to say that I think the ones I did not highlight are unimportant; it's just that I think you can run into those issues onshore (if you replace "language barriers" with "poor communication skills").
Inadequate user involvement and poor change controls are, I think, more acute risks with offshore-outsourced projects because there's a certain amount of "out of sight, out of mind" to these projects. It's not like people are hearing programmers talk around the watercooler or at lunch; offshore projects have a greater risk of 'going dark' for long periods of time. Similarly, with different working hours, different holiday schedules, etc., I think it's considerably more common for offshore work to get off the change-control rails. You really need to do daily check-ins with offshore teams, just like you do with local teams. It's harder and slower than a standup meeting, but I think it's definitely a necessary part of the daily routine.
I have a client who needs a Web-page component that does some photo compositing. Nothing super-fancy, but it needs to be professional, obey some business rules, and do some things dynamically based on static data.
The prototype is in Flash, but is filled with hideous programming -- magic numbers, a big monolithic function, etc. Today, the client said that they would be willing to accept the installed-base problem of Silverlight if I recommended it.
Well... It seems to me: Flash's programming story remains, if not terrible, nothing to write home about. Silverlight's programming story is pretty stellar -- a vast programming base from which I can draw the people to do the business rules and dynamic stuff (i.e., the programming). Flash may be beloved by designers, but for photo-compositing, I don't see a great advantage over WPF / Silverlight.
Am I wrong?
One way or the other, if you're a great Flash programmer or have some experience in Silverlight, I'm hiring... Drop me a line direct at lobrien -at- knowing -dot- net.
Update: "Not fair to compare poorly written Flash to green-field Silverlight" was one comment, but I am not comparing the code; I am comparing the code-creation possibilities (and, to some extent, the ecosystem: I think I want people who see the task as programming, not people who see the task as a design issue). "Try Flex," came a message from Adobe, which is certainly fair: we have a prototype in Flash, Flex has a better programming story than Flash-the-development-environment, and Flash is universal. Still looking for developers, but now I'll throw "Flex developers" into the mix as well...