This is not a blog. I can post here, but you can't! It is as plain as plain text can be. Sure, I can change the background color, font face, things like that, but what's the point? Fancy borders and pretty pictures can't make dull text lively.
This is not Facebook. There is no button you can push to tell me and whoever is looking over your digital shoulder that you liked what I wrote. Frankly, I don't care if you liked it or not, because this is for me. It is a place where I can record thoughts and ideas and have them available later on from anywhere and at any time. It is a place where I can exorcise my personal demons (or, perhaps, exercise them).
It is random, disorganized, and possibly infuriating - like me!
I often see a young man in the weight room wearing a fraternity t-shirt with some combination of Greek letters on the front and the following syllables on the back, each on a separate line: No Bitch Ass Ness. Every time I see it I get a bit angry because I dislike seeing the language abused in this way. It should read, of course, No Bitch Assed Ness, with the stress on the second syllable of Assed.
Physicists have a theory called the Many Worlds Interpretation of Quantum Mechanics, which holds that our universe is constantly bifurcating into multiple versions of itself, each of which goes merrily on its way, splitting further, and so on ... Thus, in one such world my doctor may nick my aorta and I bleed to death, while in another I survive the operation and live long enough to see a medical cure for aging and effective immortality.
The idea seems to have originated with a Princeton graduate student named Hugh Everett III, who developed it in his PhD dissertation. I confess I haven't studied this work sufficiently to understand what governs the timing and variety of the splittings, but one may envision an extreme case in which they occur at every instant, and in which the number of paths taken is infinite - infinite to the extent that every outcome that is logically possible must ensue in at least one of them. Thus, e.g., the wizarding world of Harry Potter, assuming it is logically possible for it to have developed from some point in our past, must occur down some branch of this humongous universal tree. (The tree metaphor and words like 'bifurcate' hardly do the thing justice, but will have to serve.)
It is an invigorating idea, and one which seems to have gained a certain currency among modern physicists. More importantly, to me at least, I have come to believe strongly that some form of this idea is inevitable if Schrödinger's cat and other quantum weirdness are to coexist with rational sanity.
'But wait!', you say, 'what about consciousness? Which branch among all these possibilities will my consciousness actually experience?' Simple: all of them. Your consciousness splits along with the universe, each newly created version experiencing its own world independently of the others, and each one blissfully unaware of the others' existence.
Given all of this, I only ask that you grant me one further rather obvious point: In the universe you are currently experiencing, you are alive. Now don't get me wrong. Many people do die in Everett's multiverse, at many times, in many places, and in many parallel ways. Too bad for them! But for you, the one reading these words, the Governor's call has always arrived in time. You have obviously survived the open heart operation. And you will be alive on the day science finally conquers death.
There is something ineffably sad about the death of a young child. So many experiences you treasure that they never got to share. The taste of that first kiss. The color and texture of the sky on the day you first realized you were in love. The scratchy feeling of the diploma in your hand, and the look on your parents' faces when they saw you holding it ...
We have an annual 5k race in this area named "Paige's Butterfly Run" in honor of a girl who died of leukemia at the age of 8. In late spring you can see flyers for the race in many places, each with its picture of a little girl with bright wide eyes, as if still following the flight of one of her beloved butterflies.
I never met you, Paige. Heck, I never even ran in your race. But I always get a little lump in my throat when I see your picture.
On the green
Near the hole
He's the finest player out of Tokyo!
While we are on the subject of figures of speech, what kind of figure of speech is used when someone denigrates a quality or idea by using a rhyme? As in "Fancy Schmancy"? What kind of figure of speech is it when a noun is used as a verb, as in "Bell the cat"; or when a verb normally used only in the active voice is used in the passive voice, as in "Run the water from the tap."
Computer simulation is becoming an ever more precise and powerful tool, so it is interesting to speculate about a time, say 10 or 20 Moore's Law doublings into the future, when it is possible to simulate the real world to nearly arbitrary precision. The simulated events don't really happen inside the computer, of course. Instead of a real boy shagging a real football, we'd see inside the machine only an enormous sea of binary digits, surging and seething in response to some sophisticated set of programmed-in equations. But look at that sea of ones and zeros through the right sort of translating apparatus - digital glasses, if you will - and you could watch the world unfold on screen in exactly the way you would see it if you peered around behind the back of the screen.
You might even be able to buy one of these as an app for your iPhone someday.
Scientists already use simulations all the time to make predictions about the future. For example, we can be almost certain, based on simulations of large bodies moving under the influence of gravity, that the Great Galaxy in Andromeda (M31), and our own Milky Way galaxy, will collide with each other in another 4 billion years or so. But I'm talking about a level of speed, accuracy, and sophistication in simulation that is almost unimaginably more powerful. Powerful enough to track the exact motion and state of every molecule in your body, the circulation of electricity through your nervous system, and the same for every person, animal, and object in your vicinity. Unimaginable, perhaps, but in the fullness of time I know of no physical law that would preclude the development of such a device.
Once the existence of this gadget (in theory) is granted, then we can further imagine souping up its internal clock - running it in overdrive, so to speak. Run it double speed, triple speed ... and watch the events of your day tomorrow unfold on screen over the course of the next five minutes. You'd have a handy gizmo for looking into your future; and assuming you had worked into your plan for tomorrow a quick peek at the financial pages, one that could prove quite profitable too.
It may seem that we are now entering a paradoxical area, one haunted by the usual demons that plague the concept of time travel. One can imagine an amusing premise for a science fiction story in which the hero, seeing himself on screen dying in a car accident tomorrow, resolves to take steps to prevent it. He will tie himself to his bed if he has to! The machine, of course, already knew what he would see when he looked at the screen (as the machine knew he would), as well as the incredibly unlikely succession of events that leads ultimately to the fulfillment of the prophecy in every detail.
Once you can view the future, why not view the past? Run the machine in reverse at high speed and watch what really took place in the Globe Theater back in Shakespeare's day. Would historians become obsolete? Not a chance. All the more need for them to explain the significance and meaning of the events we see so clearly depicted on the screen. Indeed, I predict that history would become a growth industry, and that there would be the real danger we might become so enamored of unraveling our past that we neglect to enact our future.
Like everybody else I sometimes play the "wish game": what would I wish for if I happened to encounter a genie in a giving mood? The usual top choices - a million dollars, immortality, infinite wisdom, or a date with Jennifer Lopez - certainly have their appeal, but I think I'd rather opt for a chance to experience what it would be like to be another person, if only for a short time. Having been trapped inside this one head for 58 years now, the chance to be inside another's, to think their thoughts and feel the sensations from their body instead of my own, would surely be enlightening in a way that no other experience could ever be.
It is revealing that I used the phrase "inside this one head", because we all share (don't we?) the powerful sense that our conscious minds inhabit the roughly one-foot-diameter spheres enclosed by our crania. It is there, in that rather cramped space just north of our necks, that we truly seem to "live". But could it be that this powerful sense of location is just a coincidental byproduct of the fact that our eyes are in our heads? Asking a blind person wouldn't settle the question, because our ears are also on our heads.
What about Helen Keller? Did she feel that her consciousness resided in her fingertips? Or that it was somehow spread uniformly over the entire tactile surface of her body?
Did anybody think to ask her? Would she even have understood the question?
People who slavishly press the key fob button a third time for the satisfying beep that "confirms" the car is really locked. The only thing it confirms is that the beep is working. Test the doors if you want to be sure they're locked.
Emboldened by the success of those astronomers who saw fit to cut the former ninth planet down to size, I propose a mathematical movement to remove the number 2 from the ranks of the prime numbers. Let's face it: as a prime, the number 2 is definitely an oddball. For one thing, it's even. Seemingly countless theorems in number theory require exceptions or additional assumptions to handle the case p = 2. Be it resolved then: Henceforth, the number 2 shall be called a "dwarf prime."
I have been an enthusiastic fan of the Search for Extraterrestrial Intelligence (SETI) since I was a schoolboy. I yearned to be the Project OZMA technician at the controls when the big dish at Green Bank was slewed to center its crosshairs on Epsilon Eridani, and in came a Morse-code-like signal, endlessly repeating the first 8 digits of the continued fraction expansion of pi. The desire to train a receiver on the heavens, spiced as it is by the lust for the hunt, has existed since soon after Marconi's "Come here, Watson. I need you!" began its eternal journey across the cosmos.
Now, after more than 50 years of ever bigger radio dishes, more sensitive and capable signal scanners, vastly more powerful computers, and essentially nothing to show for it, some are beginning to ask "Where are they, already?" (See, for example, Paul Davies' excellent book, "The Eerie Silence.")
There are plenty of reasons to believe that they are, indeed, out there, and that we have so far just failed to notice them. Modern exoplanet discoveries strongly suggest that almost every star has a planetary system, and that the variety and novelty of such systems far exceeds what we could have imagined in the days of OZMA. Moreover, a nonzero fraction have modest-sized stony planets within their star's habitable zone. There must be hundreds of millions of such places out there. It doesn't take that much imagination to think that, over a billion years or so, something might begin to stir in the muck and stagnant ponds on not a few of them. And, given the obvious survival value of intelligence, that within another 3 or 4 billion years (or so), some ape-like creature might toss a bone in the air and get a far-reaching idea...
To be honest, there is still a lot of debate about just how hard the transitions from non-living to living matter, and from simple to intelligent life, really are; but for me the proof is in the "mediocrity principle": Both transitions happened here. Moreover, throughout our history, without exception, every time we came to believe we were special in some way, that belief was soon overturned in emphatic fashion.
SO, WHERE ARE THEY, ALREADY?
It may well be, as a few have suggested on the internet recently, that alien signals are being broadcast in some encrypted form. Our own digital communications signals can be boiled down to long streams of binary digits - zeros and ones - and there are sophisticated methods that allow the sender to scramble up such streams until they appear to the outsider completely random, like a message obtained by tossing a coin. They are not random - they only appear to be - and the intended recipient has a key, secured beforehand, that will allow him to unscramble the message and discover its meaning.
Well, isn't a sequence of zeros and ones, even if highly scrambled, still a sequence of zeros and ones, and not the universal background hiss of white noise emitted when hydrogen atoms randomly scattered among the galaxies make their transition to the ground state? Wouldn't it still be recognizable as some kind of message, even if we couldn't read it?
Yes, but it's pretty easy to imagine transmission schemes that really could successfully masquerade as cosmic background noise. Here is one such idea, loosely based on the concept of AM radio transmission. In standard AM radio, the station broadcasts an electromagnetic wave at an exactly specified frequency, 1150 kHz, for example. The undulations of the waves pass by a given location - of the radio receiver, say - in lock step, swinging back and forth from one electronic extreme to the other 1,150,000 times per second. American teenagers of the 1950s, up late over their crystal radio sets hoping to pull in London, were familiar with the hiss of this signal, still being broadcast by stations that had gone off the air. The hiss by itself isn't very interesting, but when in active use the station would vary the amplitude (or size of swing) of the waves by stamping an audio signal on top of them. Tune a receiver with the proper electronics to cancel the 1150 kHz "carrier" signal, while retaining the much slower amplitude variations, and Presto! Out comes Carly Rae Jepsen singing "Call Me Maybe."
Now imagine that we broadcast on two different frequencies simultaneously, 1150 and 1290 kHz, say. Before stamping the audio signal on one of the two carrier signals, we add a random signal made from white noise, and before stamping the other carrier, we subtract the same white noise signal. Now tune a special receiver that is designed to AVERAGE the two decoded AM signals, and out pops Carly's voice again. But tune any ordinary AM receiver to either frequency individually and all we hear is the hiss of white noise!
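The arithmetic of the scheme is easy to check for oneself. Here is a minimal sketch in Python (everything here is my own stand-in: lists of numbers play the role of the demodulated audio, and no actual radio is simulated):

```python
import random

random.seed(42)

# A stand-in for the audio signal we want to transmit.
audio = [random.uniform(-1.0, 1.0) for _ in range(1000)]

# White noise as loud as the signal itself.
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Channel A carries audio + noise; channel B carries audio - noise.
channel_a = [a + n for a, n in zip(audio, noise)]
channel_b = [a - n for a, n in zip(audio, noise)]

# Either channel alone is dominated by the noise, but the special
# receiver that AVERAGES the two recovers the audio: the added and
# subtracted noise cancels exactly.
recovered = [(x + y) / 2.0 for x, y in zip(channel_a, channel_b)]

assert all(abs(r - a) < 1e-9 for r, a in zip(recovered, audio))
```

The cancellation is exact in principle because (audio + noise) + (audio - noise) = 2 × audio, whatever the noise happens to be.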
If aliens really use some such system, the task of the SETI researcher randomly surfing N different frequency channels in search of a meaningful signal becomes much more difficult. She must now search roughly N-squared possible pairs of frequencies. And it gets much worse. Why stop at systems that only use two frequencies? Why not 3? Or 10? Or 10,000? Or a million? A scheme that combined input from a million different frequencies in a way that could only be unscrambled by some sophisticated mathematical function applied to all million simultaneously would be unimaginably difficult to sift from the cacophony of the heavens.
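The growth of that search space is easy to put numbers on. A quick sketch (the million-channel figure is the one from the text; "pairs" here means unordered pairs of channels):

```python
from math import comb

N = 1_000_000   # a million frequency channels, as in the text

singles = N            # one look per channel
pairs = comb(N, 2)     # every unordered pair of channels: N(N-1)/2

# Checking each channel alone is a million looks; checking every
# pair is already about half a trillion.
print(singles, pairs)
```

And that is only the two-channel case; the number of k-channel combinations, `comb(N, k)`, explodes far faster still.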
Okay, you say: granted, the ETs potentially have the means and opportunity to hide their conversations from us in plain sight, but what would be the motive? Why not simply broadcast things the way we do, willy-nilly into the night? What's to be gained by all this encryption? Well, consider this. Most likely, interstellar commerce in physical goods is unfeasible due to the vast distances - and consequent expense - involved, but commerce in pure information would be quite feasible, albeit a bit slow due to the limited speed of light. There could be a vast and vital market going on amongst the star systems in which both the commodity and the currency is pure information. There is, after all, nothing more powerful than information. When the aliens in the movie Contact wanted us to pay them a visit around the star named Vega, did they send us a spaceship and operator's manual? No, they sent us the instructions to build our own. Much cheaper and, in the end, equally effective. Given the existence of such a market, why would any self-respecting advanced civilization waste its marketable secrets by blabbing them out loud so that unwashed rubes like us could hear them, without even so much as a Mozart symphony in return?
Conventional SETI has not succeeded, and considerations such as the above suggest that it is like looking for the proverbial needle in a haystack, but with the added difficulty that we haven't the foggiest idea what a needle looks like. We need a whole new approach, and, while Paul Allen is free to spend his billions any way he wants, I don't think more and bigger dishes, better receivers, and faster computers will do the job.
Confirmation bias is the foible of human reasoning by which we attach greater significance to events that bolster what we already believe (or want to believe) while downplaying the significance of disconfirming evidence. Take for example the common experience of having a bad dream and learning the next day that some close relative has passed away during the night. Later on, when such experiences are related to hushed audiences around the campfire, they produce the eerie feeling that "there must be something to this ESP business". But what about the hundreds of times we had a bad dream and awoke to find that nothing whatsoever untoward had happened during the night?
Prejudice, another unfortunate human foible, grows on confirmation bias. Even the most open-minded and well-educated among us harbors some tiny kernel of prejudice against one group or another. Planted by our culture, like original sin, before we were old enough to realize we were being manipulated, these subtle cues provide the basis for what may later morph into full-blown prejudice.
[ to be continued ]
Josh was a relatively normal boy. His parents thought he might have a touch of Asperger's Syndrome, which would explain his awkwardness in social situations. Sometimes he had ideas that seemed odd to people at first, but after thinking about them for a while they would conclude that Josh usually had good reasons - in fact highly logical ones - for thinking the way he did.
Josh got his first job when he was 15 years old working for a nursery that specialized in selling potted plants. On his first day, the proprietor explained that he was to spend the afternoon pulling weeds from pots containing seedlings that had sprouted a week or two before. Josh tried not to feel intimidated by the seemingly endless rows of potted plants that stretched into the distance. Wanting very much to make a good impression, he plunged right in and worked very hard and very quickly; so quickly, in fact, that he finished the job an hour early.
Josh expected that the owner would be very pleased with his quick work and might even allow him to go home early, and so he was quite shocked to find the owner literally purple with rage, striding back and forth in front of the tables of potted plants and tearing at his hair. It emerged that Josh had misunderstood the instructions, and had pulled the seedling from each pot, leaving the weeds behind. With his entire inventory now scattered on the floor, the owner faced the loss of a whole season of business. To him, Josh's error seemed monumental, epic even, and he fired Josh on the spot.
To Josh on the other hand, the error - and there was no denying it was an error - seemed a very minor one. Depending on how information is stored in the teenage brain (and who are we to know how information really is stored in the teenage brain?) the error could amount to a single bit, a 1 digit stored someplace in place of a 0. A single bit error is the minimum error possible, and so we begin to understand why Josh was puzzled.
When our era finally assumes its proper place in history, it may well be that the concept of predictive text will become one of its most enduring legacies. You know: you type "perh" into your smart phone, and the phone instantaneously offers you a menu of selections representing the most probable continuation - in this case, almost certainly "perhaps." The trick is possible owing to fast computing, and large stored matrices that give probabilities for the next character in your magnum opus, conditional on what has gone before.
The concept of predictive text raises many intriguing questions. For example, at any point early in the composition of a piece of text there is always a most probable continuation, based upon the statistics of everything that has ever been written in English (assuming at least one author has begun the same way in the past). Thus, if I begin "T", it is likely my next letter will be "h". If I begin "To market, to market, to buy a fat...", then it is likely my next letter will be "p". These two examples, however, represent opposite extremes of the phenomenon. In the first case, statistical analysis of pairs of English letters suggests that "T" is most often followed by the "h" of the word "The", since that is the way many English sentences begin. The second place choice is probably not too far behind.
On the other hand, "To market, to market, to buy a fat pig" happens to be the beginning of a standard nursery rhyme. It is possible that "pig" represents the only continuation in the annals of human expression, rendering the continuation "p" absolutely certain on a purely predictive basis.
This raises the question: how often is it that what we write is truly original? I am fairly certain that no previous written essay has ever begun precisely the same way as this one, and that, therefore, no reasonably straightforward predictive algorithm could foresee that the letter following the end of this sentence will be "O". On the other hand, sophisticated algorithms steeped in the rhythms of human argumentation might well have foreseen that the continuation would be "O". Given arbitrary computer power and total lack of memory constraints, it is downright spooky to contemplate how well computers might predict what we write.
If we imagine a predictive system able to call upon truly unbounded resources, then it is interesting to speculate about what kind of story it would produce, given a randomly selected beginning. For example, if I begin "The little...", then there is almost certainly a unique (or, at any rate, small number of equally probable) continuations based upon selecting from an assay of all writings that ever began with those words. What would the continuation be? I actually have no idea, but that ignorance does not bear on the possibility of selecting a most probable continuation with the aid of computing power.
At each stage then, we ask, "what is the longest phrase at the end of what has already been said that at least one person has said before?" Assuming there is not a unique such utterance, choose the next letter at random based on the statistics of what has followed all those previous statements. What would happen? No doubt, occasionally the magic utterance would be railroaded into finishing some famous unique phrase, such as "Four score and seven years ago...", but once the inevitable quote has run its course, all bets are off. How would someone most likely continue if they had to FOLLOW the Gettysburg Address?
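The procedure just described - find the longest suffix of the text so far that has occurred before, then draw the next letter from the statistics of what followed it - can be sketched in a few lines of Python. The corpus here is a toy of my own choosing, just big enough to show the nursery-rhyme case where the continuation is forced:

```python
import random

def next_letter(text, corpus, rng=random):
    """Longest-suffix predictive text: find the longest suffix of `text`
    that occurs in `corpus`, collect every letter that follows an
    occurrence of it, and pick one at random from that collection."""
    for k in range(len(text), 0, -1):
        suffix = text[-k:]
        followers = []
        i = corpus.find(suffix)
        while i != -1:
            j = i + len(suffix)
            if j < len(corpus):
                followers.append(corpus[j])
            i = corpus.find(suffix, i + 1)
        if followers:
            return rng.choice(followers)
    # No suffix of any length has ever been "said": fall back to raw
    # letter statistics of the corpus.
    return rng.choice(corpus)

corpus = "to market, to market, to buy a fat pig"

# The rhyme admits only one continuation, so the choice is forced.
assert next_letter("to buy a fat ", corpus) == "p"
```

With everything ever written standing in for my 38-character corpus, this is exactly the "magic utterance" game: common suffixes yield a genuine lottery of followers, while a famous unique phrase railroads the output.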
It has become fashionable to bash fat people. They are the new smokers, displacing that foggy lot to the role of the formerly oppressed, like the forgotten lepers who once were bundled off to Molokai Island. Smokers originally went out of fashion when people realized that they were costing society money. Smokers tend to begin their death spiral earlier in life, and to die in messier, more costly ways than the health-conscious. Their presence in the risk pool weighed all of us down, and we resented it.
In the case of fat people, the arguments are similar. They are subject to greatly higher incidences of heart disease, diabetes, and assorted other ills. They are absent from work more often. They stink. They tend to breed fat kids. They eat more than their fair share! Why shouldn't they be taxed, or fined, or ... whatever it takes to get back from them resources that were never owed them in the first place?
Why shouldn't the playing field be level?
Every time I drive into a state park and pay one of their now ubiquitous usage fees, I recall somewhat sadly how state park usage and other "optional" activities were once funded collectively, with taxes. Then some people began to ask themselves, "I don't use the park. Why should I have to pay for access by a bunch of tree-hugging liberals?" If you use something, you should expect to pay for it; otherwise, it should be something "other people" pay for.
To an extent these arguments have merit, but I grow increasingly wary of the logical conclusions toward which they tend. I imagine a future dystopia, made possible by medico-electronic technology, in which we are all implanted at birth with various monitors that track our true consumption of oxygen, water, and food calories; our hours of access to precious sunlight; our production of CO2 and other waste products; and the waste heat we generate in work, coitus, and more frivolous activities (some of these counting on one side of the ledger, some on the other).
We are saddled at birth with a mountain of maternity debt, and the meter tied to our O2 usage picks up with our first post-natal gasps. Throughout our unproductive childhoods, the bills continue to mount higher with every unnecessary summer visit to the ice cream stand; but now, at last, we begin to rally somewhat, as our positive efforts in school - valued, and rewarded, by society - begin to eat into our debt ever so slightly. (If, that is, we do well in school.)
Multiple strategies exist to navigate such a world. One is to hunker down and avoid costs to the extent possible. The hunkerer mainly sits around in a zen-like state, avoiding the elevated metabolism - and attendant costs - brought on by unnecessary excitement. Perhaps with some modest and unchallenging job he might hope someday to square his original debt. He might even hope to afford the dignity of that one final consumptive act - the occupation of a 2x6x6 foot chunk of mother earth on a quasi-permanent basis.
The go-getter, on the other hand, attempts to win the game by increasing her revenue stream. She takes on a very challenging job and tries to excel in it. She raises a large and productive family and cultivates a network of friends and colleagues. Breathless, and sweaty from the effort of it all, she of course must accept a higher rate of debt accrual; but, like the hunkerer, with any luck she may hope to achieve that ultimate goal in this brave new world: to die a free person, having, in the final accounting, given more than she got.
We all repeat ourselves, and as we get older - as we age - it gets closer, approaching to, too too. Nevertheless, it is amusing, and interesting from a mathematical point of view, to imagine a world where any repetition is not only frowned upon, but strictly forbidden. It is important here at the beginning to stress the word ANY. You will have to forget about words like aardvark. It will only come out as ardvark. (Which doesn't sem like such a terible change!)
You will also need to refrain from stammering, revisiting the same old tired points in meetings, engaging in much doo-wop singing, and the like. Hic balbutior non valet (here, stammering avails nothing). Yet, it doesn't seem unreasonable that, with a little much needed revision in spelling standards, and proper attention to being fresh and original in what we say, we couldn't go babbling on into eternity without ever saying precisely the same thing twice.
But no. It follows from some quite clever mathematics that if you limit yourself to a fixed finite alphabet (the 26 letters of the alphabet plus common punctuation marks, say), and eschew double letters, repeated words, repeated phrases, etc., then ultimately you will run out of new things to say!
The limit is imposed by the size of one's alphabet, as we can easily illustrate by taking the extreme case of a binary alphabet - the long string of zeros and ones that my computer stores behind the scenes when I type a document like this one, for example. A complete lack of repetition at the binary level is very restrictive indeed. For example, if I set out to create 10011101001, then all adjacent zeros and ones would have to be merged to avoid repeated digits, leaving me with the sing-song alternation 1010101; but then, even worse, the alternating 01s would collapse to just 101. If I try to say anything beyond this, then I either stay in the same place (1011), or lose ground (1010 -> 10.)
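The collapsing can be carried out mechanically: repeatedly find any block that immediately repeats itself (what combinatorialists call a "square") and delete the second copy. A small sketch, which reproduces both the aardvark example and the binary one:

```python
def collapse(s):
    """Repeatedly delete the second copy of any immediately repeated
    block, shortest blocks first, until the string is repetition-free."""
    changed = True
    while changed:
        changed = False
        n = len(s)
        for length in range(1, n // 2 + 1):
            for i in range(n - 2 * length + 1):
                if s[i:i + length] == s[i + length:i + 2 * length]:
                    s = s[:i + length] + s[i + 2 * length:]
                    changed = True
                    break
            if changed:
                break
    return s

# The double letter goes, as promised.
assert collapse("aardvark") == "ardvark"

# The binary example from the text: adjacent repeats merge to 1010101,
# then the alternating blocks grind it all the way down to 101.
assert collapse("10011101001") == "101"
```

Over a two-letter alphabet, 010 and 101 are in fact the longest repetition-free strings possible, which is why the binary case is such a dead end.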
If we lived in a singing world where only the 5 vowels were permitted, then it turns out that we could say at most 332,381 different things before being forced into some kind of repetition. This is a consequence of a 1952 result due to J.A. Green and D. Rees. See, e.g., M. Lothaire, Combinatorics on Words, Cambridge University Press, Cambridge, UK, 1997, pp. 32-34.
When, on the day of reckoning, all repetition has been squeezed from your lifetime of discourse, what shall have been your contribution? And how should we all proceed, so as to put off as long as possible the inevitable day we are forced into solecism?
I am attempting to explain the concept of statistical significance to Sally, a student who has come to my office hours.
Me: Suppose there are two coins, one of which is normal, with heads on one side and tails on the other, while the other has been improperly struck and has heads on both sides. I am given one of the two coins, and attempting to determine which one, I toss it five times, obtaining five heads. This is a highly unlikely outcome for a fair coin, and so on the basis of Statistics I decide that I have been given the unfair coin.
Sally: Why don't you just turn it over and see if it has heads on both sides?
Me: This is a highly simplified - perhaps oversimplified - example designed to illustrate a point.
Sally (rising to go): I guess I just don't understand Statistics.
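For the record, the arithmetic Sally and I never got to: five heads in five tosses of a fair coin has probability one in two to the fifth, which sits below the conventional 5% significance threshold.

```python
# Probability that a fair coin shows heads on all five of five tosses.
p_value = 0.5 ** 5

# 1 in 32 - rarer than the usual 0.05 cutoff, so we reject the
# hypothesis that the coin is fair.
assert p_value == 0.03125
assert p_value < 0.05
```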
In the middle of a run the other day I decided to try to do the length of a certain upcoming road in less than or equal to a certain time, as a kind of workout. I was halfway down the road and really starting to hurt when a familiar voice in my head started in on me. "You might as well quit," it said. "No one will ever know you didn't do it." It was right. I was running by myself and the road is in a remote and unpopulated area. I kept running. The voice came again, with an even stronger argument. "No one will ever know you even set out to do it, much less didn't do it."
I have learned how to handle that voice. I did it.
The world is a very unfair place. The few, the privileged class, enjoy a life of pleasure and ease. They are very well satisfied, while the many endure lives of increasing squalor in order to support the excesses of the few. Eventually their lot becomes intolerable, and they rise up. There is a calamity, a revolution, and the established order is torn down. In its aftermath, everyone lives in squalor, and nobody is satisfied; but, at least it's fair.
Not everyone is willing to endure a life free from all pleasures, and especially one that is equivalent to everybody else's. A few of the discontents strive to better their situation, to "get ahead". (They rationalize that they are doing it for their children.) Even fewer succeed, but they are the ancestors of the next privileged class. And so, the cycle begins anew.
I submit that, with a bit of thought, you can find a way to argue that you (and your kind) are secretly the ones in charge of the world. Take plumbers, for example. Most people, when asked, would not place plumbers very high on a scale of social status, but when needed, their status soars to giddy (if temporary) heights. When one of them comes to my house, usually in urgent response to some sudden, costly, and ongoing emergency, I am tempted to bow down in deference to their skills.
I am an academic, and we too are not highly ranked by the public. "Those who can't do, teach", goes the refrain, and "Those who can't teach, teach teachers." There is some truth in it, but what the public fails to grasp is the extreme wisdom of placing oneself in this position. A respondent on an email discussion group recently chided me online for "having stuck my finger in a moving fan." I responded to her thus: I'm an academic, and our purpose is to make people feel antsy and uncomfortable. If we goad them enough, then they will stick their fingers in the moving fans, thus saving us the trouble.
We, the academics, teach those who will go out and run the world. I say that, arguably, that makes us the ones really in charge. Moreover, it leaves us free to do what we really want to do: stare out the windows stroking our beards (if we have them), lost in thought.
As a teacher of mathematics I am commonly exposed to instances of flawed reasoning by students, perhaps no more so than my colleagues in other fields, but flaws in mathematical reasoning stand out more clearly than in any other discipline. Last week, for example, a student came to my office hours demanding to know why he had received a zero for his "proof" of the Pythagorean Theorem. The gist of his argument was this: starting from the Pythagorean Theorem he deduced through perfectly valid steps that two figures known to have equal areas did, in fact, have equal areas. If the Pythagorean Theorem, assumed as a hypothesis, leads to something true, it must be true itself, no?
No. This is an instance of the logical fallacy known as affirming the consequent. In purely logical terms, the format of this flawed argument reads:
A -> B ("A implies B")
B (B is known to be true)
A (Therefore, we may deduce that A is true.)
This is vaguely similar to the perfectly valid rule of deduction known as modus ponens:
A -> B ("A implies B")
A (A is known to be true)
B (Therefore, we may deduce that B is true.)
In the case of affirming the consequent it may be that A actually is true (depending on what A actually stands for), but its truth does not follow from the argument given. Consider, for example, the following, in which an obviously true conclusion is derived from an obviously false premise: Socrates is a woman. All women are mortal. Therefore Socrates is mortal.
Affirming the consequent, in any of its variant forms, is the most common flaw in reasoning I observe in junior and senior math majors, this despite the fact that all of these students have taken our "intro to abstract math" course where we attempt to beat out of them this and other logical boners. My theory is that it is a case of over-generalization of the (valid) format of proof by contradiction - a format students seem to adore, probably for no better reason than that you always know how to get started in a proof by contradiction. In a proof by contradiction, you begin by assuming the opposite of what you intend to prove and attempt to see where that assumption will lead. If it leads to something known to be wrong, then you have succeeded in the proof. Carelessly tossing a couple of denials into this valid format leads to the fallacy of affirming the consequent.
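The contrast between the two argument formats can be checked mechanically with a truth table. A small sketch (the `implies` helper is just material implication):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

truth_values = list(product([True, False], repeat=2))

# Modus ponens: in every row where (A -> B) and A both hold, does B hold?
modus_ponens_valid = all(b for a, b in truth_values if implies(a, b) and a)

# Affirming the consequent: in every row where (A -> B) and B hold, does A hold?
affirming_valid = all(a for a, b in truth_values if implies(a, b) and b)

print(modus_ponens_valid)   # True: the inference is valid
print(affirming_valid)      # False: A=False, B=True is a counterexample
```

The single failing row (A false, B true) is exactly the Socrates case: false premise, true implication, true conclusion.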
Let's say you are a scientist who has just written an important breakthrough paper but you loathe the entire peer-reviewed journal system with its sloppy editors, lazy and remiss referees, and its unearned status as arbiter of all that is new and noteworthy. You'll be damned if you'll send your paper into the maw of this corrupt institution, and yet the ethics of science demand that you make your discoveries available to other researchers. So you decide to put the paper on your personal website where it is freely available to all, Nobel prize winners and street people alike; where all the mighty may look upon your work and despair. Time to sit back, put your feet up, and let the money roll in.
Sadly, what's to stop an unscrupulous imposter at some future date from grabbing a copy of the paper from your website and submitting it to one of those journal editors as his own? Well, you say, you could simply point to the copy on your web server with its time stamp from years ago and take the guy to court. Unfortunately, such time stamps could be faked, and a clever lawyer could get the jury to decide that it is equally likely that you are the imposter: relics can always be created with the appearance of age, can't they?
I will argue here, however, for the existence of a system that would make it possible to time stamp computer files in a way that could not conceivably be faked. This system is based on a hypothetical computer program called the Verifier. The Verifier is open source so anyone can compile his own copy at any time and check that it is correct. The Verifier encrypts the paper using a key that it builds from a secret key supplied by the author and the largest Mersenne prime known at the time, which it finds via the internet. The encryption is done in such a way that the Verifier can check that the encrypted file could only have been encrypted by itself. The author keeps a copy of the encrypted file in addition to the freely available plain text version on his web site.
Fast forward twenty years to a courtroom where the author is trying to establish priority over a bad guy who has just attempted to publish a copy of the author's work as his own. The jury builds their own copy of the Verifier from certified source code and the author supplies them with his secret key. The decryption produced by the Verifier is seen to be gibberish. The Verifier is then asked to try successively older largest Mersenne primes. When it arrives at the record prime from 20 years ago, a clean copy of the paper is produced. The author is vindicated and the court adjourns.
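No such Verifier exists, of course, but its binding step can be sketched. In the sketch below, HMAC-SHA256 stands in for the hypothetical self-checking encryption, and the Mersenne exponent stands in for the full prime; the function name, sample text, and exponents are all illustrative only:

```python
import hashlib
import hmac

def verifier_tag(paper: bytes, secret: bytes, mersenne_exponent: int) -> str:
    """Bind a document to the author's secret and the era's record prime.

    The imagined Verifier would use the full prime 2**p - 1; mixing the
    exponent p into an HMAC key plays the same role in this sketch.
    """
    key = secret + str(mersenne_exponent).encode()
    return hmac.new(key, paper, hashlib.sha256).hexdigest()

paper = b"An important breakthrough paper"
secret = b"the author's secret key"
tag = verifier_tag(paper, secret, 74207281)  # a record exponent of its day

# Twenty years on, the jury recomputes with successively older records;
# only the right exponent (and the right secret) reproduces the tag.
assert verifier_tag(paper, secret, 74207281) == tag
assert verifier_tag(paper, secret, 82589933) != tag
```

The record prime serves as a public, unforgeable calendar: no one could have mixed it into a key before it was discovered.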
Hmmm... I like this idea so much I think I'm going to run the whole unblog through the Verifier, so don't try any funny business, ye mighty.
Did you ever wonder where files go when you click on "Empty Recycle Bin"? What happened to all the girlie pictures you hid from your wife by clearing your browser history, or what ultimately became of those emails you thankfully deleted before sending them to your boss? Well, brace yourself, because I'm finally going to reveal the answer.
It's a little known fact that every home computer built since a pimply faced Bill Gates and Paul Allen assembled the first one in their garage has come equipped with a special channel that connects directly to the Kremlin. And since then, every single binary peccadillo you blithely consigned to the bit-bucket has entered a pipeline that leads straight to Moscow.
There, armies of comrades, babushkas, and other no-goodniks spend their nights and days sifting through all manner of American digital garbage in search of our darkest secrets. Russians, after all, don't need to hold steady jobs like we do. It's Socialism. They get from their government according to their needs, and in return their benevolent rulers ask only that they spend their waking hours digging up dirt on Americans.
Nikita Khrushchev famously said "We will bury you", but he never actually said that. It was an imperfect translation from the Russian, a complicated language with inflections, guttural sounds, and backwards letters that is impossible for ordinary people to understand properly. What he really said was "We will embarrass you."
For Russians, who go through life getting drunk on rubbing alcohol and attempting to dance from a squatting position, embarrassment is practically a way of life. Why, they reason, should the Americans be exempt from it?
Stasis - the placing of astronauts into a state of suspended animation so that they can survive interstellar trips - has long been a staple of science fiction literature. At rocket speeds attainable today, even a trip to the nearest star system beyond our own would take hundreds of years, and astronauts would grow old and die before reaching their destination. (If there is a creator, there is perhaps no greater tribute to His wisdom than the vast distances He has placed between us and other worlds we might attempt to ruin once we have finished with our own.)
In A.E. Van Vogt's classic story, "Far Centaurus", astronauts on an epic mission to reach Proxima Centauri awake from their long sleep to find that progress has marched on, and that trips to their destination, and even far beyond, have become routine. While they have been asleep, their slow antiquated vessel has become little more than a quaint tourist stop on the road to the stars.
Science fiction writers are probably correct that some use of stasis will be necessary for practical travel over interstellar distances, but I will argue here that they have it backwards: the people who need to enter the sleep chambers are not the members of the crew, but their friends and loved ones who stay behind on Earth.
Everyone who has studied modern physics knows that nature has, in the constant and invariable speed of light, placed an absolute speed limit on material objects. Nothing can travel faster than light, and while light covers an amazing 186,000 miles every second, even this rapid pace falls far short of the speeds needed to make practical travel to all but the very closest stars within a human lifetime.
Nature has, however, left us a small loophole. According to Einstein's Special Theory of Relativity, the clocks on board ships travelling near the speed of light would seem to run slower than those back on Earth. This effect, known as "time dilation", becomes more pronounced as the speed of the ship relative to the Earth increases. If a starship could travel sufficiently close to the speed of light, it might make the round trip to the center of the galaxy and back in mere weeks, while to those back on Earth it would seem that more than a hundred thousand years had passed. While the astronauts themselves might return hale, healthy, and still young, everyone they ever knew would long ago have passed into dust. Given this reality, very few would choose to make the human sacrifices interstellar travel would demand.
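The arithmetic behind that claim is the Lorentz factor. A quick check with illustrative numbers (a round trip to the galactic center, taken here as roughly 26,000 light-years away):

```python
import math

def dilation_factor(v_over_c: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# Out and back covers about 52,000 light-years.
v = 0.9999999999                      # ship speed as a fraction of c
earth_years = 2 * 26_000 / v          # ~52,000 years pass on Earth
ship_years = earth_years / dilation_factor(v)

print(round(ship_years, 2))           # under a year aboard, millennia at home
```

Add a few more nines to the speed and the shipboard time shrinks to weeks, while the tens of thousands of Earth years stay fixed by the distance.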
If, on the other hand, family members of the astronauts were frozen, or by some other means placed into a state of suspended animation, the travellers could at least look forward to seeing a few familiar faces when they returned. The technology for such suspended animation already exists. Those with a terminal illness and sufficient funds can arrange to have their bodies frozen until such time as medical science has found a cure for them. In the fullness of time, they reason, progress against all known diseases, and even ultimate victory over them, will inevitably ensue, just as we now routinely treat maladies that were incurable in the middle ages. Unfortunately, it is perhaps equally probable that society, and medical science along with it, will decline and fall in the interim. In a desperate future of poverty and need, will anyone remember to throw the switch that brings our poor frozen a-listers back to life?
A stasis-based space-faring society would have to be extremely stable over very long time spans. It could be organized around "clans" - groups of individuals related by blood or by choice, who stick together during the long periods of time necessary for interstellar travel. Periodically, part of the clan would depart to conduct clan business in some remote star system, while the rest would stay behind, doing their part to contribute to the maintenance of the home world for a time period roughly equal to that experienced by their space-faring clan-mates. Then, into the stasis tubes for the long sleep until their astronaut brethren return to wake them. Since there would be many such clans, all operating independently of the others, somebody would always be awake to do the routine chores needed to keep the world running. To both those travelling and those sleeping, space travel would seem to take a reasonable amount of time, and their lives might not be so different from ours.
Thanks to a bout of Netflix binge-watching, I reached the 15G mark on my "unlimited" internet contract for the first time this month. When this happens, according to the contract fine print, you can continue to download and upload all the data you are able, but at a lower speed. Your account has been "throttled". Coming as it did on the heels of the repeal of "Net Neutrality", this really set me off.
When I am in the ISP's low usage good graces, I am suffered to use data at the rate of 12 Mbits/second. After being throttled, the rate drops to half that amount. "Never mind," I say. "I'll show them. I will program one or more of my home machines to fill up any unused network bandwidth, at any and all times of day, with useless traffic whose only purpose is to tie up their worthless network to the maximum my data rate allows. I'll stop doing this when my usage data is reset at the end of the month, until the next time they try to put a muzzle on me." Childish, I know. But satisfying.
Perhaps this would be illegal. It smacks of "denial of service attacks", a now dated hacking technique which flooded unprotected internet servers with so much traffic that legitimate users were squeezed out. My hypothetical "attack" on my ISP would be insignificant in scale - I doubt my provider would even notice, though they might feel threatened if the idea took off and went viral. Moreover, the only service denied would be my own. I've paid for every bit my bot would send out and receive, and naturally I'd set things up so that my own usage took precedence. Even during the times I'm in bed and my automaton is downloading random stuff at maximum speed, who's to say it is not being used for some legitimate, even scientific, purpose? For example, the program could be designed to download every available page on the world wide web, one after the other. Probably 99.999% of it winds up in the bit bucket, but who's to say that I'm not just an indefatigable surfer with eclectic tastes?
It would not be very difficult to write a web client program that would generate an endless list of plausible URLs, request the document at each one in turn, and then go on to the next site. Many requests would fail, but that uses up bandwidth as well. All received data would be ignored. Such a client might be triggered by a screensaver, and might spawn parallel copies of itself until all available bandwidth is in use.
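Such a client might look something like the following sketch; the domain names it guesses are fabricated, and it is shown for illustration rather than for actual use:

```python
import itertools
import urllib.request

def plausible_urls():
    """Endlessly generate guessed URLs (all the names here are fabricated)."""
    words = ["news", "blog", "shop", "wiki", "data"]
    for n in itertools.count():
        for w in words:
            yield f"http://{w}{n}.example.com/index.html"

def burn_bandwidth(limit: int) -> None:
    """Fetch documents and discard the bytes; failed requests count too."""
    for url in itertools.islice(plausible_urls(), limit):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()   # straight into the bit bucket
        except OSError:
            pass              # a failed request still used some bandwidth

# burn_bandwidth(10)  # uncomment to waste a little of your own bandwidth
```

A real version would watch for the user's own traffic and yield to it, as described above; that scheduling is omitted here.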
The idea of such a program lays to rest the whole concept of "unlimited" internet access. With truly unlimited access - arbitrarily large amounts of data at arbitrarily high speeds - any pimply-faced anti-social teen could easily generate staggeringly large amounts of internet traffic that appears to be entirely legitimate. Without some throttling mechanism, an ISP is vulnerable to a crowd-sourced attack in which all of its users (or many of them) use all of their available bandwidth all the time. Instead of downloading screen-savers like "seti@home", which try to do something useful during idle periods, users unhappy with the provider could download one that does useless things, as fast as possible, during idle periods. As a result, the ISP could handle fewer users on average, and profits would drop. Since this can never be allowed to happen, we have throttling. Indeed, a moment's thought shows that the only truly protective throttling system would be one that continues to cut back rates to ever smaller and smaller values as usage continues to exceed predetermined quotas. There is nothing unlimited about this: it is just quotas of a more complicated kind.
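Such an escalating scheme is easy to state precisely. A toy model, using the 15G quota and 12 Mbit/s figures from above and assuming (my assumption, not any ISP's published policy) that the rate halves for each additional quota consumed:

```python
def throttled_rate(base_mbits: float, usage_gb: float, quota_gb: float = 15) -> float:
    """Halve the data rate for each additional quota's worth of usage."""
    quotas_exceeded = max(0, int(usage_gb // quota_gb))
    return base_mbits / (2 ** quotas_exceeded)

print(throttled_rate(12, 10))   # 12.0 -> under quota, full speed
print(throttled_rate(12, 20))   # 6.0  -> first throttle
print(throttled_rate(12, 50))   # 1.5  -> rates keep shrinking
```

However the cutbacks are scheduled, the total data per month is bounded, which is the point: "unlimited" is just a quota in disguise.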
A comfortable lull had descended on Professor Peabody's study. He and his companion, Doctor Stolzfus, found ample companionship in their silent recollections of the repast gone by, and in their after dinner drinks. Presently, Peabody cleared his throat and said, "Did I ever mention the strange case of young Sam Morrison? Not his real name, of course."
"I can't say I recall your having done so," answered Stolzfus from the depths of his lounge chair. "How did he present?"
"Oddest thing," the professor continued. "He woke up one morning to find that his senses of hearing and sight had been swapped. Credit to the poor devil that he didn't lose his sanity at once."
"Do you mean to say that he heard things with his eyes and looked out of his ears?" said the doctor, an expression of frank disbelief on his face.
"Well, more like some kind of sudden misconnection, or misdirection of signals in his brain. An eye is an organ designed by nature to respond to light, not to vibrations of air pressure. You cannot hear with one. Whatever sights his eyes were seeing were being redirected to the part of his brain that normally processes one's hearing."
There was a long period of silence as Stolzfus attempted to work through the implications of such an arrangement.
Peabody went on: The first person to notice anything amiss was his wife, who complained that he would follow her everywhere she went whenever she was speaking. "It got on your nerves the way he was always in your face", she said. "I prefer the way he was before, when he wouldn't look up from his notepad no matter what I was talking about."
After an initial period of helpless confusion, Morrison had appeared to adapt well to his sensory re-wiring, and his behavior eventually became almost normal. He did, however, retain certain quirks. His depth perception was uncanny; and he had an amazingly detailed short term memory for things in sequence.
"The average person," Peabody went on, "can easily recite short lists of digits - phone numbers, for example - after having heard them only once. Only very rare individuals can go beyond 9 digits with any degree of accuracy. Morrison was able to spit back thousands of random digits he had just heard for the first time, and never make a mistake."
"When you think about it," he continued, "a digit heard by a normal person fades completely from the mind before the next one is heard, and must be retrieved later from memory (if it can be retrieved at all). The same sequence of digits, when processed by the visual cortex, just becomes part of a much bigger "picture". They, and all the other digits, continue to occupy the same positions in this picture for some time, until ... the next picture is assembled, I suppose. In the meantime they continue to be right there in Morrison's perceptual space."
"What about his depth perception?" Stolzfus wondered. "If I process a sight as if it were a sound, why does that give me better depth perception?"
The Professor sighed. "This is all quite conjectural, but consider: when you listen to the world around you, whether you are in the quietest room or on the noisiest street corner, what you are really listening to is the passage of time."
Stolzfus offered only a skeptical grunt, so Peabody went on to explain how modern physics teaches us that it is the messy, random goings on in the world around us that give shape to our perception of the passage of time, that give time a direction and a sense of pace. "Sit in what seems at first absolute silence deep within a cavern, and you will soon begin to hear the gentle soughing of subterranean air currents, the hollow sound you hear when you put a conch shell to your ear. Sit long enough, and you will begin to hear your own heartbeat. Both of these are just the rustling of Lady Time's skirt." He finished with a poetic flourish that would have surprised his students.
"I still don't see the implications of this for one's perception of depth," the doctor said with a grumble.
"We've just seen that the main difference between visual and auditory perception is that auditory perception is primarily one dimensional, while visual perception is three dimensional. My hunch is that Morrison sees in a more one-dimensional way. Perhaps he relates to external objects mainly in terms of their distance from him, or, equivalently, the amount of time it would take him to reach them. This would tend to exaggerate the importance of depth perception."
This seemed to satisfy the doctor, and both men turned to their cigars with a simultaneity born of many previous evenings like this one.
"What ever became of young Sam?" the doctor asked at length.
"I think he went on to become moderately successful in certain fringe areas of the art world, and was reasonably well off when he died. In the end, the most remarkable thing about Morrison's affliction is how little difference it made," concluded the Professor.
Stolzfus took a quick swig from his cognac bottle, swirling it in his mouth for some time before he swallowed. "Your Remy has a wonderful nose, Peabody", he said. "I compliment you on your taste."