(slot1 (error "Must provide value for slot1")))
That's something I hadn't thought of doing before; I hope you find it useful as well. The rest of Robert's article is well worth a read.
When you're at a public computer, your friends' machines would only send you a chunk once you've authenticated with them (using a zero-knowledge password proof, perhaps SRP). When you've collected enough chunks, you reassemble the encrypted key and decrypt it with your password.
You'd have to trust your friends not to collude and share chunks with each other, and even if they did, they would still have to guess your password.
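To make the chunking concrete, here is a minimal Common Lisp sketch of one way it could work (the function names are made up for illustration). It's a simple XOR split where all n chunks are required; a real system would more likely use a threshold scheme such as Shamir's secret sharing so that any k of n friends suffice, and a proper CSPRNG instead of CL:RANDOM.

(defun random-chunk (length)
  ;; NOTE: CL:RANDOM is not cryptographically secure; a real
  ;; implementation would use the operating system's CSPRNG.
  (let ((chunk (make-array length :element-type '(unsigned-byte 8))))
    (dotimes (i length chunk)
      (setf (aref chunk i) (random 256)))))

(defun split-key (encrypted-key n)
  "Split ENCRYPTED-KEY (a byte vector) into N chunks whose XOR recovers it."
  (let* ((len (length encrypted-key))
         (chunks (loop repeat (1- n) collect (random-chunk len)))
         (last-chunk (copy-seq encrypted-key)))
    (dolist (chunk chunks (append chunks (list last-chunk)))
      (dotimes (i len)
        (setf (aref last-chunk i) (logxor (aref last-chunk i) (aref chunk i)))))))

(defun join-key (chunks)
  "XOR the collected CHUNKS back together to recover the encrypted key."
  (reduce (lambda (a b) (map 'vector #'logxor a b)) chunks))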
CLiki2 features real user accounts, effective anti-spam protection, and much improved article versioning (you now get graphical diffs like Wikipedia, among other things). Change notification for the whole wiki and for each individual article is provided by ATOM feeds. There is now code coloring support. Pages can be deleted for real. The layout and ATOM feeds have been designed to be accessible, and work great in console browsers.
The biggest casualties in the move were Latin-1 characters (some pages were a mix of UTF-8 and Latin-1, and Latin-1 lost). In particular, if your name contains a lot of accents, I apologize.
If you want to write about Lisps other than Common Lisp, the ALU wiki is the place. Right now it is a little sparse and could use more contributions.
Bonus link: Check out Takeru Ohta's github account for some neat Common Lisp software: https://github.com/sile
Periodically directories of recently added items or of musician-related messages would be printed out and left there. In other terminal locations, users sought out complete strangers to assemble car pools, organize study groups, find chess partners, or even pass tips on good restaurants.
One of the great things about Common Lisp is the variety of implementations and the scope of deployment platforms and performance characteristic trade-offs (run-time vs compile-time vs memory size) they encompass. This is also one of the things that complicates library development. Testing on any significant combination of implementations and platforms is not practical for most Free Software library authors.
cl-test-grid is a project that provides automated, distributed testing for libraries available via Quicklisp. You download cl-test-grid onto your machine and tell it which CL implementations you have available; it runs the test suites of many libraries in the current Quicklisp release and submits the results to a publicly available report as well as a public bug tracker. The report breaks down test failures by Common Lisp implementation, platform/OS, and library version.
The report is a great resource for library authors and implementation maintainers. If your library is distributed via Quicklisp and isn't tested by cl-test-grid yet, Anton Vodonosov provides some tips for making your test suite cl-test-grid friendly. If you're looking to contribute to Free Lisp software, one of the best ways is to get cl-test-grid and run the tests. See if there are any failures in the test suites, and report the bugs to the library maintainers and help find a fix.
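I won't repeat Anton's tips here, but the general prerequisite is that your tests can be run non-interactively through ASDF. As a rough sketch only (this is a generic ASDF/FiveAM setup with hypothetical system names, not cl-test-grid's actual requirements), it looks something like:

(asdf:defsystem "my-library"
  :components ((:file "my-library"))
  ;; (asdf:test-system "my-library") will load and run the test system
  :in-order-to ((test-op (test-op "my-library/tests"))))

(asdf:defsystem "my-library/tests"
  :depends-on ("my-library" "fiveam")
  :components ((:file "tests"))
  :perform (test-op (o c)
             ;; run the FiveAM suite named :MY-LIBRARY-TESTS
             (uiop:symbol-call :fiveam :run! :my-library-tests)))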
It turns out that what I was thinking about in terms of friend logins without revealing passwords is a widely researched area called zero-knowledge password proofs. In particular, one application of zero-knowledge password proofs is encrypted key exchange, the patent for which just expired at the end of 2011.
Even more directly applicable, there is a 2009 paper by Michel Abdalla, Xavier Boyen, Céline Chevalier, and David Pointcheval on how password-based public key infrastructure can be built using multiple nodes ("how your friends can help you remember your keys" - also see the slides).
One thing I haven't seen considered yet for P2P publishing is broadcast encryption. Originally developed with the goal of digital restrictions management for centralized distribution, broadcast encryption seems useful as a way to manage "unfriending" people in a P2P social network.
The confusion comes from two fields where continuations are used as a concept: compiler intermediate representations (where Continuation-Passing Style (CPS) is one possible form of intermediate representation), and First-Class Continuations (FCC), which is a metaprogramming technique for manipulating the stack of a program.
In terms of being a programming construct, continuations only exist in FCC. When it comes to CPS, a continuation is a concept only. In CPS, this concept is easy to understand: a continuation is the next step in a computation - the next instruction in the instruction stream.
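A small Common Lisp illustration of the idea (the -K suffix is just a naming convention I'm using here): every function takes an extra argument, the continuation, and calls it with its result instead of returning.

;; Direct style: the "next step" after each operation is implicit.
(defun hypotenuse (a b)
  (sqrt (+ (* a a) (* b b))))

;; CPS: the next step is passed in explicitly as K.
(defun square-k (x k) (funcall k (* x x)))
(defun add-k (x y k) (funcall k (+ x y)))
(defun sqrt-k (x k) (funcall k (sqrt x)))

(defun hypotenuse-k (a b k)
  (square-k a (lambda (a2)
                (square-k b (lambda (b2)
                              (add-k a2 b2 (lambda (sum)
                                             (sqrt-k sum k))))))))

;; (hypotenuse-k 3 4 #'identity) => 5.0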
For FCC, the concept consists of two parts: saving the current computation, and resuming a saved computation.
The main cause of confusion around FCC is that the First-Class Continuation programming construct is traditionally represented as a function (this has to do with the fact that FCC as an idea was derived from work on CPS). But a First-Class Continuation is not in any way a function. As Christian Queinnec points out in Lisp in Small Pieces, a First-Class Continuation is more like a catch tag: saving the current computation is like setting up a catch block, and resuming that computation is like throwing a value to that tag - anything that happened between the catch and the throw is forgotten. The thing that makes a First-Class Continuation special is that you can throw to it anytime from anywhere, and you end up in the corresponding catch, with whatever you were doing previously forgotten.
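In Common Lisp terms, the analogy looks like this (what makes real First-Class Continuations strictly more powerful is that, unlike a catch tag, they can also be invoked after the catch form has already returned):

(catch 'saved-computation
  (+ 1
     (throw 'saved-computation 42)
     (error "never evaluated - this part of the computation is forgotten")))
;; => 42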
Marc Feeley has proposed an alternative interface for First-Class Continuations that makes invoking First-Class Continuations explicit, which is implemented in Gambit Scheme.
So finally we get to continuation as a word. As a word, "continuation" is used in three contexts:
- In Continuation-Passing Style as a conventional way of calling the next step in a computation.
- In First-Class Continuations as a name for saved computations.
- To indicate the next step in a computation when looking at a particular point in any given piece of code.
Christopher Wadsworth coined the term "continuation" for a way to formally reason about jumps/gotos, but the term can also be used informally when talking about a particular point in a piece of code. If you mentally use a different word in each of the three contexts, I think a lot of the confusion surrounding continuations can be cleared up. Something like the following might work:
- "saved stack"
One obvious use of WebRTC is providing audio and video chat in webpages without Flash (WebRTC specifies audio/video input, real-time codecs, and jitter buffers). Another obvious use is to develop a BitTorrent-like distribution protocol for videos; something like this should really cut down on YouTube's bandwidth bills.
Combined with the File API, WebRTC has the potential to enable any kind of P2P file sharing. This is, I think, WebRTC's most exciting possibility.
To step back in terms of commentary, the web browser has now come full circle to being a very weird virtualized operating system, whose APIs rival Windows in size, idiosyncrasies, and complexity. The need for application hosting declines - both the application and "putting your data in the cloud" can now be handled by a Freenet- or Tahoe-LAFS-like WebRTC library. What's interesting is that the lower friction of development and deployment should push new web applications away from centralized servers and towards the P2P approach. Unlike fully hosted apps, there is no extra effort involved in making an offline version. Server rent/lease and administration costs are reduced considerably by offloading as much data storage and computation as possible onto the browser. This latter possibility in particular should enable many startups to avoid the capital requirements needed for scaling up services that depend on the network effect, such as most "social" things. I don't like to use buzzwords, but this looks to be disruptive.
The tactics of the RIAA/MPAA mafia ("essentially the cultural arm of the United States") are akin to suing the postal service and manufacturers of cardboard boxes for making it possible for people to receive counterfeit goods. The intimidation is aimed at all strata of online individuals and organizations: viewers of audio/video, hosting services, and software developers.
I have observed before that the actual business of the RIAA/MPAA members has nothing to do with audio or visual production, but is all about controlling distribution. The goal of the ongoing intimidation is to prevent innovation in distribution channels, and force viewers/hosts/developers to conform to the single distribution model authorized by RIAA/MPAA, regardless of whether any copyrights owned by members of the RIAA/MPAA are actually involved or not.
An interesting observation is that the RIAA/MPAA doesn't actually know what that distribution model is supposed to be. They are content to do business as usual selling plastic discs (all the while attempting to lock out smaller content producers and distributors with initiatives such as Macrovision, CGMS-A, CSS, ARccOS, SDMI, CPRM/CPPM, Sony rootkit CDs, AACS, Trusted Computing, UEFI (in an unholy alliance with Microsoft), PlaysForSure, HDCP, COPP, PMP, DVB-CPCM, FairPlay, OpenMG, etc.), and pretending that the ol' innertubes is a regional on-demand cable network (Netflix, Hulu, Spotify). Selling singles on iTunes (regionally restricted, of course) is about as innovative as the RIAA/MPAA members have dared to get in online distribution.
The arrest of Kim Dotcom has parallels to the 2004 arrest of Isamu Kaneko, author of the Winny file sharing program.
I am a card-carrying member of the Pirate Party of Canada. Given all the above observations and my previously expressed opinions on the futility and negative consequences of online copyright enforcement, I believe it's not safe for me, at the current time, to head up a project one of whose possible uses is distributing files.
On the other hand, I have gotten really interested in research around F2F overlay networks and cryptographically-enabled online privacy, so expect me to write more about that.
- Jason Kantz's JSGEN
- Peter Seibel's Lispscript
- Red Daly's PSOS
- Web frameworks: BKNR, UCW, Weblocks, teepeedee2
- Libraries: Suave, css-lite, clouchdb, uri-template, cl-closure-template
- Parenscript mailing list thread on multiple value return
- Manuel Odendahl's original Parenscript announcement
This philosophy made clear and transparent the issues of translating a high-level language into Von Neumann-style register machine code, and provided mechanisms to radically simplify the transformation. Continuation-passing style reified issues of control transfer, temporary stack values, and register use. Function arguments become registers; function calls become jumps. Difficult issues like nested scopes, first-class functions, closures, and exceptions become easy.
The other great, simplifying thing about Steele's philosophy is the self-similarity of the code as it undergoes transformation. The input of each stage of the compiler is a Scheme program, and (until machine code generation), the output is a simpler Scheme program.
Many compilers miss out on the self-similarity aspect of Scheme in two major ways. Those compilers that do follow continuation-passing style usually implement the CPS portion of compilation using completely different data structures than the other portions. This adds needless boilerplate code and makes the CPS portion look more complicated than it is. This needless complexity shows up in some ML compilers (in particular, in Andrew Appel's otherwise excellent and highly recommended Compiling with Continuations), and causes people to remark that implementing SSA is not all that much harder than doing CPS (Olin Shivers disagrees).
The subtler, but even more serious, omission is ignoring Steele's advice to implement the parts of the language above the intermediate layers as macros. Not doing so spreads the complexity of implementing features like classes and objects throughout the different portions of the compiler, starting with the parser.
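The point is that the surface language can be desugared into a tiny core by macros before the compiler proper ever sees it. As a trivial Common Lisp example, a binding form can expand into the lambda application that the CPS and code-generation passes already know how to handle:

(defmacro my-let (bindings &body body)
  ;; (my-let ((x 1) (y 2)) (+ x y)) expands into ((lambda (x y) (+ x y)) 1 2)
  `((lambda ,(mapcar #'first bindings) ,@body)
    ,@(mapcar #'second bindings)))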
Following Steele's techniques, building a Scheme compiler that is several times faster (and many times smaller) than the compilers/virtual machines of popular dynamic languages like Python and Ruby is easy. Scheme 48 was originally written in 48 hours. If you don't have that much time, Marc Feeley can show you how to build a Scheme compiler in 90 minutes. And if you're really strapped for time and don't want to sit through CPS theory, Abdulaziz Ghuloum shows you how to dive right into generating x86 machine code in 10 pages.
In 1978*, a year after the introduction of the Apple II, Xerox PARC built a portable computer with a 7-inch, 640-by-480 bit-mapped touch-screen display, based around a commonly available 1 MHz, 16-bit microprocessor with 128 KiB** of memory, running Smalltalk-76. The computer was called the NoteTaker.
The NoteTaker ran Smalltalk as fast as the Alto (the Smalltalk-76 VM (6 KiB) actually executed bytecode twice as fast on the 8086 as on the Alto, but the memory bus was much slower, making interactive performance feel similar).
I always thought the 8086 was extremely underpowered and good only for DOS and terminals. In hindsight, it's mind-boggling how long it took x86 PCs to catch up with the Macintoshes and Amigas of the 1980s.
* Note that Wikipedia and other sources give the NoteTaker's date as 1976, but this is likely when design started, as design of the 8086 itself also only started in 1976 and the processor did not ship until 1978. The NoteTaker manual is also dated December 1978.
** The NoteTaker manual specs the machine at 256 KiB of memory.
*** The current Wikipedia article about the NoteTaker claims this computer would have cost more than $50,000 if sold commercially (presumably in 1978 dollars). Assume that the CRT, floppy disk, power supply, keyboard, and mouse cost $2,000 (a whole Apple II system with 4 KiB of memory retailed for $1,298.00 in 1977). The 8086 originally wholesaled for $360. According to the NoteTaker manual, the NoteTaker had a second 8086 which acted as an I/O processor but was "also available for general purpose computing tasks." It would have been entirely possible to replace the I/O processor with a much cheaper CPU. Looking again at Apple's price list, a 48 KiB Apple II board retailed for $1,938, while a bare-bones 4 KiB one sold for $598, which gives about $31 per KiB. So 128 KiB would retail for around $3,900 and 256 KiB for around $7,800. It certainly would have been possible to produce, and maybe even sell, the NoteTaker with 256 KiB of memory for less than $15,000. Note that a few years later, Kim McCall and Larry Tesler made a subset of Smalltalk-76 that ran in 64 KiB of memory, but with the full system image loaded only about 8 KiB of memory was available for user programs.
**** The NoteTaker also came with an Ethernet interface.
PS - While researching this article, I also stumbled on another PC you've probably never heard of. The Kenbak-1, similar in operation and capabilities to the MITS Altair 8800 (both came with lights, toggle switches, and 256 bytes of memory), was sold via advertisements in Scientific American for $750 in 1971, four years before the Altair.
The first time the binary is run, it opens a web browser pointing to a page that asks you to choose a nickname and password (the binary will have a web server to present the local interface). In the background, it's going to generate a private/public key pair and set up the filesystem directory where you store data for that account. After that, it will let you look up friends online. I'm undecided about how this step will work. There are a lot of alternatives for federated, centrally located directories: just build a web one, XMPP (integration with chat seems like a good idea), IRC, etc. I don't know a lot about how the distributed options work, but they are out there and seem to work ok (Freenet, GNUnet, etc.). It seems like a good idea to make this part modular, but present it behind a consistent and easy to use interface.
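As a rough sketch of that first run (assuming Ironclad for the key pair and Hunchentoot for the local web interface; SETUP-ACCOUNT and the port number are made up for illustration):

(defun setup-account (account-directory)
  "First run: create the account's data directory, generate a key pair,
and start the local web interface on the loopback address only."
  (ensure-directories-exist account-directory)
  (multiple-value-bind (private-key public-key)
      (ironclad:generate-key-pair :ed25519)
    (declare (ignorable private-key public-key)) ; would be serialized into ACCOUNT-DIRECTORY here
    (hunchentoot:start
     (make-instance 'hunchentoot:easy-acceptor
                    :address "127.0.0.1" :port 8420))))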
The binary will provide an HTTPS port to the outside world to let your friends log in remotely (see previous discussion). If you're behind a NAT gateway, it's going to take some extra work - both the NAT-PMP and IGD protocols will need to be supported.
In general, UDP hole punching is needed to let two ClearSky nodes talk to each other, although it alone is not enough to provide remote logins from the outside. Hole punching requires an accessible third party - either a friend's computer with an unrestricted connection, or a central server if no such friend is available. Relying on unknown third parties is problematic because it means depending on either volunteers (every ClearSky client could be a volunteer, but this opens up possibilities for surveillance) or your mirror provider.
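The "punch" step itself is simple: once the third party has told both peers each other's public address and port, each side sends an outbound datagram to create the NAT mapping that lets the peer's packets back in. A hypothetical usocket sketch:

(defun punch-hole (peer-host peer-port &key (local-port 7001))
  "Send a datagram to PEER-HOST:PEER-PORT so that the peer's replies
to LOCAL-PORT will be let back in through our NAT."
  (let ((socket (usocket:socket-connect peer-host peer-port
                                        :protocol :datagram
                                        :local-port local-port))
        (ping (map '(vector (unsigned-byte 8)) #'char-code "ping")))
    (usocket:socket-send socket ping (length ping))
    socket))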
All traffic between you and your friends goes over SSH. Messages and files are cryptographically signed to ensure authenticity, but encryption during transport is provided by SSH. The ClearSky client should encrypt data before writing it to disk to ensure privacy even if the hosting computer gets lost or stolen.
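A hypothetical sketch of the signing part, assuming Ironclad's Ed25519 support (the function names are mine):

(defun sign-message (message private-key)
  "Sign MESSAGE (a byte vector) so friends can verify it really came from us."
  (ironclad:sign-message private-key message))

(defun message-authentic-p (message signature public-key)
  "Verify that SIGNATURE over MESSAGE was made by the holder of PUBLIC-KEY."
  (ironclad:verify-signature public-key message signature))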
Apps are going to be started in separate processes (via fork(), or by running the same executable with different options) for extra security and stability, and to let the OS handle resource control. Games, content libraries, chat, voice, calendars, etc. are all possible apps. One example app that could be built, but that's not possible with Facebook, is a Tahoe-LAFS-like service that would let you trade hard drive space with your friends to securely and redundantly back up your private data.
Apps will communicate with the ClearSky process via sockets and the file system. Security is going to be provided by language-level virtualization. The app runtime will prompt the user when the app wants to access filesystem folders or other resources. Things like video and audio will ideally be handled by HTML5, and if not, then by Flash. It's undesirable to have an app display native UI or use the OS multimedia or other capabilities directly - this shouldn't be included in the platform unless there's a good concrete demonstration that it's needed. OTOH, letting an app manage its own sockets may be a good idea.
It would be nice to share a large number of large files securely with your friends over a high-bandwidth link. As well, having a reliable server on a high-bandwidth link would make replication and backup of your data more convenient.
A large number of companies offer storage services that can do that. Dropbox is the most popular, but it doesn't provide convenient sharing features because it focuses on replicating and syncing your private files. Ubuntu One is similar. These services are also insecure - Dropbox and Ubuntu One can read your files.
The most convenient service for private, easy-to-share storage on remote hosted servers was Allmydata. Allmydata may be gone, but the people and software (Tahoe-LAFS) behind it are still around. Tahoe-LAFS is Free Software, and its techniques and algorithms are well documented.
But storage and bandwidth aren't free. How would the economic model behind a secure, private mirroring service work?
Very few people want to pay cash for mirroring. Can't have the mirror show you ads - that would defeat the privacy aspect (besides, how would these ads be served and where would they be viewed?).
But your computer is going to be on a lot of the time to participate in the p2p network. Why not have it mine bitcoins? Then you can pay for mirroring storage/bandwidth with the bitcoins gained from contributing work to a mining pool.
Metcalfe's law hypothesizes that each node brings value to the network when it joins. A logical conclusion is that not only the network, but the node itself should be able to realize that value in monetary terms. Cryptographic currency mining enables this.
Another long sought-after idea that may be enabled by the mining approach is micropayments, but that's a problematic and unrelated area.
- FreedomBox Foundation
- Dave Taht
- Kragen Javier Sitaker
- Singly (see also this hype piece)
- The guy behind Unqualified Reservations
- Opera Unite
- Vinay Gupta
The reason no one has been successful so far is that the solutions offered are too hard to use (if it's any harder than Facebook, very few people will use it), don't address the issue of replication (Tonido and Opera Unite, for example), and don't have sustainable revenue models. I think that with the right design, engineering, and business approach, these problems are not insurmountable.
You might notice I left out one project: Diaspora. A look at Diaspora's source code should convince you that while the project claims the same goals, the approach they're taking does not address any of the three aspects mentioned above.
The account name and password combination is convenient to remember and seems to work well in practice for logging in to computer systems.
It's easy enough to keep your password secret when you log into your own machine, and you can use public-key cryptography to have trusted communications with your friends. You can remember your account name and password, but you're not going to be carrying around your private key (if such a system is to work for most people, you probably won't even be aware you have a private key).
You can trust your friends, but you don't want to tempt them by transmitting the cleartext passwords to their machine (ever have an obsessive ex as a friend on Facebook?). Having a unique password for each of your friends' machines won't work because you can't remember them all.
On the other hand, you can assume that your friends probably won't want to crack your password hash to get your password, and if their machine ever gets stolen, they'll tell you so you can change your password (ok, maybe that last point is not true).
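One way this could look in practice, assuming Ironclad's PBKDF2 helpers (a sketch, not a vetted design): the friend's machine stores only a salted, iterated hash of your password, and checks remote login attempts against it.

(defun friend-stores-this (password)
  "Called once when you befriend someone: returns the salted PBKDF2
string their machine keeps instead of your cleartext password."
  (ironclad:pbkdf2-hash-password-to-combined-string
   (ironclad:ascii-string-to-byte-array password)))

(defun login-ok-p (password stored-string)
  "Called by the friend's machine when you try to log in remotely."
  (ironclad:pbkdf2-check-password
   (ironclad:ascii-string-to-byte-array password)
   stored-string))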
Your friends don't have your private key, but they can sign your messages with the private keys on their machines on your behalf. If you see strange messages signed on your behalf, you can assume that either your password has been compromised, or your friend's machine is acting maliciously (because it has been stolen or compromised, or because your friend is doing the equivalent of the "let's post an 'I'm pregnant' status update" joke played when you forget to log out of Facebook).
You still have your private key on your machine, from where you can change your password, repudiate the fake messages, and publish a revised list of friends that you trust to sign messages on your behalf.
I'm sure this scheme has been thought of before, and I'm sure it has problems I didn't see. Any thoughts or comments? Where should I post this to get the opinions of people knowledgeable on cryptography?
Obviously, the Apple app store model of cryptographic signing is useless as a measure of trust in what an application does in a decentralized scenario (note that this is different from using signing to establish authenticity during application distribution). The Apple app store model of cryptographic signing is actually useless for Apple app store apps as well - I know of at least two apps that made it into the Apple app store that keep an open, non-password-protected telnet port on your iPhone. So centralized quality control does not work.
Will virtualization be able to solve the safety problem? What virtualization is really about is names and meta-machines. In the case of running a VMware virtual machine on your real PC, your real PC is the meta-machine from the point of view of the VMware one, and memory addresses are the names. As long as the software in the VMware box has no access to the memory addresses belonging to your real PC, it cannot escape to do things that the VMware virtual machine cannot already do (you can think of virtualization as being like the plot of The Matrix).
Even if the virtualization is unbreakable and the virtual machine does everything you need without posing a danger to your data (and how would it do that if it needs to modify your data to be useful?), you cannot tell whether the app that the virtual machine will run doesn't have secret back-doors or information leaking channels (both can be accomplished using steganography to hide things in the data the app consumes and produces as part of its regular operation). Checking compiled binaries for these is simply not feasible.
The only alternative that is left is the one advocated by Richard Stallman - you can't trust a program unless you can read its source code. While it is still possible to hide back-doors in source code, it requires a very large and extremely messy code base to hide them effectively. And very large and extremely messy code bases tend to lead to shit applications that no one really wants to use (unless corporate IT makes you). Applications with clean code bases that are easy to audit are nice; use them.
While there is no way to prevent these two techniques (and in fact it is undesirable to do so, though that doesn't stop Apple from trying), you certainly should have the freedom to use applications written and audited by people you trust. The centralized, signed app store approach used by Apple destroys this continuum of trust by putting all applications on an equal level - "approved by Apple" doesn't mean much when the approval process is secretive, arbitrary, and does not guarantee quality or security.
Are there any other benefits to source-only distribution besides security? Plenty: complete portability, high performance (compile to native code), tiny download sizes, easy dependency management, etc.
One way to encourage the ideal of "easily auditable source" is via licensing. This is where the innovation of Henry Poole comes in handy - the Affero GPL is the most business-friendly of all Free Software licenses (more on that in a later post) and when used effectively will enable disruptive new business models. The "platform" part of the hypothetical ClearSky virtual machine will benefit tremendously from being licensed under the AGPL.
The only thing fuzzy about The Cloud is the definition of the word. It's basically a way for large companies to bamboozle you into giving up your privacy.
There's also a lot of wishful thinking reality distortion involved here. Large companies would love it if the only way people could do anything was in The Cloud, because today that word really means "hundreds of thousands of servers," and only large companies have enough capital to afford that, keeping competition out. Hosting companies like Amazon (there, I've said it) love it even more, because they get better utilization of their hundreds of thousands of servers, and therefore of their capital.
This situation should remind you of something: the 70s. This isn't really any different from the mainframe era. Can't afford a mainframe of your own? Rent time on someone else's.
So why are all these hundreds of thousands of servers needed? In the case of Google or YouTube, it seems pretty straightforward - it takes a lot of space to index most of the Internet and host millions of cat videos. But what about Facebook? What's really on there besides a few hundred (or maybe thousand) photos, links to YouTube cat videos shared by your friends, and your whiny status updates? It's hard to buy a USB memory stick small enough that it couldn't fit all of that three times over.
It turns out there are a lot of other things on Facebook. Things like your click stream (ie - things you don't care about), which is needed for analytics (ie - using statistics to invade your privacy) to better serve you advertisements (ie - make you buy things you don't want).
The only reason Facebook needs to be "web scale!!!" is so they can sell ads. The only reason they're selling ads is because millions of people signed up for Facebook. The only reason millions of people signed up for Facebook is because it's the easiest way to share photos, links to cat videos, and important news about where they had lunch with their friends (oh yeah, and play Farmville or whatever).
Putting it this way, it's obvious why Facebook needs to sell ads - their service isn't valuable enough to you to get you to pay for it. So it's certainly not worth your time to set up your own web server with a bunch of pub/sub protocols and get all your friends to do the same so you can log into your MyFace social network to post inane status updates and grow virtual beets from other random computers connected to the Internet. That's hard, and who wants to pay for hosting?
Except you and your friends already have one or more computers connected to the Internet. What if it was easy?
What many people don't know is that version 3 of ledger was written in Common Lisp. This version was never made into an official release. In a FLOSS weekly podcast, Wiegley explains (31:00) that Common Lisp wasn't the best choice for ledger's users.
I emailed John to learn more about this. He replied that there were only two major problems: building cl-ledger was more difficult than building ledger, and cl-ledger could no longer be scripted from the command line. In effect, the Common Lisp REPL had usurped the Unix command line as ledger's interface.
cl-ledger was written in 2007, and there are now good solutions to these problems. ASDF works well as a build system, but before Quicklisp, dependency management for Common Lisp applications was difficult. Quicklisp solved the biggest outstanding problem in building Common Lisp applications. (PS - you can give Zach a Christmas gift for his work on Quicklisp)
Didier Verna's Command-Line Options Nuker is a widely portable Unix CLI library with many features that you can use to build command-line driven Common Lisp applications.
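As a minimal illustration (not cl-ledger's actual code, and without Clon's option parsing), a Common Lisp program can be driven from the shell like any other Unix tool:

(defun main ()
  ;; UIOP ships with ASDF, so this is portable across implementations.
  (let ((args (uiop:command-line-arguments)))
    (format t "would process: ~{~A~^ ~}~%" args)
    (uiop:quit 0)))

;; Dump a standalone executable (SBCL shown; ASDF's program-op or other
;; implementations' image dumpers work too):
;; (sb-ext:save-lisp-and-die "my-ledger" :toplevel #'main :executable t)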
Money, trade, and even labor are worthless unless they satisfy a particular desire at a particular point in time (see Deleuze and Guattari's Anti-Oedipus on the role of desire in capitalism).
It's easy to accept that money is a social convention, but how can labor be worthless? The famous illustration is Bastiat's "broken window" fallacy. While Bastiat's conclusions are correct, his reasoning starts from the wrong assumptions. Wealth has nothing to do with the classical notions of trade and utility and "money better spent elsewhere" - the broken window fallacy is directly explained by the material reality of objects.
The key thing to understand is that material objects are cumulative and impermanent. These two qualities are what drive everything else about wealth.
A window is cumulative in that it satisfies a desire and has a physical manifestation, and it is impermanent in that the physical manifestation is now broken and the desire is no longer satisfied. Even if currency were completely devalued tomorrow, the fact that a window is there would still satisfy the desire.
This is why the labor theory of value (indeed, any theory of value that doesn't take into account Deleuze and Guattari's desiring machines as the ultimate driver of the economic process) is wrong. Labor or technology has no value whatsoever in itself, unless it ultimately (even if through a long series of immaterial social transactions) results in the production of a material object that satisfies a desire.
So far this essay has talked about labor, but what about trade? To understand trade, first we need to define the word: there is no single thing called "trade"; rather, the word refers to two things, which may or may not both be present in any particular "trade" (transaction): the social (the buyer/seller or consignee/consignor relationship) and the material (embodied in labor as the transportation of material objects). A purely social transaction would be finance; a purely material one would be theft.
Trade is obviously important in satisfying desire - if a material object is not in the right place at the right time, it can't satisfy that desire. A chain of social trades resulting in the transportation of a material object then obviously has value.
One of the most pressing questions today (see: SOPA) is where this leaves purely social transactions. If you want $10 for a pile of bits, money that will buy you lunch, but someone else is satisfied with a "thanks for sharing!" for the same pile of bits, where does materialism come in?
The material reality of the world is that those bits are worthless. The movie, music, and publishing industries were built on material objects: selling time slots in seats in a movie theater, selling vinyl and plastic discs, selling bound stacks of paper. The particular content on those material objects was in a very fundamental way completely irrelevant to their business, even if paradoxically it was the key to their business model.
Knowledge may be cumulative, but it is worthless unless it can be applied to satisfy a desire. It is also permanent - it cannot be stolen. What knowledge is great at is helping produce better material objects with less cost and greater ability to satisfy desire.
The real competition to the movie, music and publishing industries are the computer manufacturers and ISPs.
What the MPAA, the RIAA, and the SAG are doing when they attempt to put digital restrictions management into computer hardware and force ISPs to filter content is the equivalent of the Luddites burning water mills and power looms. This is a strategy that will ultimately fail, but in the short term it causes a slow-down in the rate of improvement of material objects, both directly (PCs and Internet connections suck more because of attempts to implement digital restrictions management) and indirectly (the improvement in the production of objects is driven by knowledge produced with the aid of PCs and the Internet, in a cumulative process).
So what about the MPAA excuse that no one will be able to finance the production of big-budget action movies anymore? At a time when the very same progress in material production is drastically reducing the cost of making a movie (via an all-digital process and computer-generated imagery), this is exactly like arguing, at the time of Gutenberg's invention of the printing press, that no one would be able to afford to author books anymore.
The creative urge is a desire in and of itself. If there's anything you should take away from this essay, it's that people pay to have their desires satisfied.