Saturday, June 8, 2013

Bland introspection, Pt. 1

I spend an inordinate amount of time questioning my self-identity, but to what end? Upon first consideration I feel that were I to possess a consistent image of myself I would then better know how to proceed in the universe, how to confront both its blatant and latent uncertainties. This approach seems almost dogmatic, however, as though a formal statement of my capabilities and interests would engender an airtight algorithm by which I could apprehend (all of) experience. One unfortunate consequence of such an approach would be to stymie any personal growth on my part. To flesh this point out, let me quote Richard Feynman from The Pleasure of Finding Things Out:
We are at the very beginning of time for the human race. It is not unreasonable that we grapple with problems. There are tens of thousands of years in the future. Our responsibility is to do what we can, learn what we can, improve the solutions and pass them on. It is our responsibility to leave the men of the future a free hand. In the impetuous youth of humanity, we can make grave errors that can stunt our growth for a long time. This we will do if we say we have the answers now, so young and ignorant; if we suppress all discussion, all criticism, saying, ‘This is it, boys, man is saved!’ and thus doom man for a long time to the chains of authority, confined to the limits of our present imagination. It has been done so many times before.
It is our responsibility as scientists, knowing the great progress and great value of a satisfactory philosophy of ignorance, the great progress that is the fruit of freedom of thought, to proclaim the value of this freedom, to teach how doubt is not to be feared but welcomed and discussed, and to demand this freedom as our duty to all coming generations.
When my thoughts drift (as they often do) toward laying a solid foundation upon which to consecrate a "temple of self" I am reminded of Feynman's admonition to humanity. It's a theme I once addressed when observing others attempt to "define" themselves to themselves and to the external world, yet I have found myself festering in the same trap. What I said to myself then was "TO STRIVE FOR SELF-DEFINITION IS TO SPONSOR SELF-LIMITATION."  Any attempt to exhaustively characterize oneself automatically establishes bounds on what one can become.

Is it then profitable, as per the Temple of Apollo at Delphi, to "know thyself"? Or am I missing a distinction here? 

Tuesday, March 20, 2012

A Very Brief Piece on How I Construe Knowledge

In formulating my perspective (I refrain from using the term ‘position’ in order to dispel any notion on the reader’s part that I will not abandon it in favor of a more fruitful one in the face of new valid evidence) on the character of any piece of our knowledge, I do not speak in terms of ‘absolute’ and ‘relative’ truths but in terms of the degree of certainty with which we can know something. Hence, while I will not make a statement with the strength of “all truth is relative,” I will gingerly declare that “we have no way of knowing anything for certain (not even this statement).” I sidestep logical consistency in favor of epistemic breadth.
Perhaps it is that things are potentially knowable and we do not yet know how to know them, which is why I include the parenthetical appendage. Can we know the things we don’t know we don’t know? I don’t think we can ‘know’ anything, whatever that means. Suppose we rigorously define knowledge, or any term for that matter. In prestating what those things are, are we then not limiting what those things can be? Suppose we were to standardize a particular means for gathering and validating knowledge and we subsequently declared the products of all other knowledge-accruing algorithms not up to snuff, and hence not knowledge. You’d get some bigoted scientists.
What interests me is the manner in which various purveyors of knowledge claim their authority. Let us examine the edifices of science and religion. From my limited experience with both I have observed that practicing scientists tend to base the validity of their claims upon the use of the scientific method to work as best as it can to deliver repeatable results. Religious folk claim validity for the knowledge proffered by their religion of choice on the basis of its being the word of an omnipotent creator. Let us dissect this admittedly brutish characterization and differentiate between the two parties’ sources of authority another time.

A Question


How can one progress honestly through one’s career as a practitioner of science, or more generally as a professional intellectual, while bearing in mind the at best probable and provisional character of the knowledge gleaned and disseminated in one’s activities, and the uncertainty that gnaws at the foundation of apparently every academic discipline? Given the existence of such persistent cracks in the edifice, how can a scientist claim to maintain his intellectual integrity while he participates in activities other than their rectification? Is it the persistence of the cracks that dissuades him? Or is he merely ignorant of their existence?

Friday, January 6, 2012

Arjun Book

I intend to restart this blog as a means of disseminating my ideas and receiving constructive criticism of them so as to aid in refinement of their content and expression. I have also retitled it "Pardon my ignorance." from "No Way of Knowing," as I found the original title to make too strong of a claim.

Several posts will be modified entries from the written journal in which I began recording ideas in the second half of 2008, titled "ARJUN BOOK" (a more appropriate descriptor did not come to mind at the time). I will indicate specifically which of the posts that follow originated in "ARJUN BOOK."

The blog format also allows me to preserve to a degree some of the contents of "ARJUN BOOK," as the composition book's binding has started to wear and the pencil marks have begun to fade.

Owing to what I presume were, at the time, well-considered reasons (they escape me now, but my best guess is that I wanted to maintain the semblance of a free stream of consciousness), I neglected to date any of the entries, and at times subsequent entries jumped erratically throughout the composition book, to the present effect that tracking developments in my own thinking has become an exercise in itself. So be it.

I have long held that what we know about the sum of existence plays a central role in how we participate in it, and so I repurposed this blog's subheading from "I care about these things" to "An informal investigation into the nature of this universe and more specifically how we perceive it and live in it."

Snarky comments are welcome, as I enjoy a good laugh.

Cheers.

Sunday, July 4, 2010

Does this make sense to you?


I had sought an explanation of biological evolution, an answer to the question as to “why” life evolves. In the process of doing so, another longstanding question of mine returned to the fore—that of whether we could answer the question “Why?”
What does an explanation do? Avoiding tautology as best I can: an explanation for a given phenomenon gives an underlying reason for its being the case. In abstract terms an explanation tells us “why” something is. But what’s going on here? It seems that in order to provide an answer as to “why” a phenomenon is, we make recourse to another phenomenon.
Example 1:
Kitchen sense recommends that adding table salt to a pot of water heating on a stove will delay its boiling, thereby allowing the water to reach a higher temperature and cook the pot’s contents that much faster. How do we explain this observation in physical and chemical terms?
At the molecular level sodium chloride has dissociated into positively charged sodium ions (Na+) and negatively charged chloride ions (Cl-). Take a look at this diagram of the water molecule:

(Diagram not available at this time)
Although oxygen’s electrons may be found anywhere on the molecule, the oxygen atom tends to attract more electron density than the hydrogen atoms. That is, there is a greater probability of finding an electron in the vicinity of the oxygen atom than near either of the hydrogen atoms. This net charge separation is known as a dipole moment: the region of the molecule occupied by the hydrogen atoms exhibits a positive charge character, while the oxygen atom gains a corresponding negative charge character. Hence, the positive hydrogen atom region attracts the dissociated chloride ions and the negative oxygen atom region attracts the dissociated sodium ions.
In order to evaporate a salt-water solution through boiling, energy is required to overcome the attractive forces between the ions and the water molecules as well as the intermolecular forces that maintain water in its liquid phase (we assume that the kitchen’s not too hot and is therefore initially at room temperature (20-25 °C, 68-77 °F)).
EXAMPLE 1 ENDS HERE
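As a rough numerical complement to the example above, here is a sketch of the standard ebullioscopic relation (not stated in the post; I am assuming the textbook formula ΔTb = i · Kb · m, with Kb ≈ 0.512 °C·kg/mol for water and i ≈ 2 for fully dissociated NaCl, and an illustrative 18 g of salt):

```python
# Rough estimate of boiling-point elevation for salt water,
# using the standard ebullioscopic relation: dTb = i * Kb * molality.

KB_WATER = 0.512   # ebullioscopic constant of water, degC * kg / mol
M_NACL = 58.44     # molar mass of NaCl, g/mol

def boiling_point_elevation(grams_salt: float, kg_water: float, i: float = 2.0) -> float:
    """Boiling-point elevation in degC; i ~ 2 because NaCl dissociates into Na+ and Cl-."""
    molality = (grams_salt / M_NACL) / kg_water  # mol solute per kg solvent
    return i * KB_WATER * molality

# A generous tablespoon of salt (~18 g, an assumed figure) in a 2 L pot:
print(round(boiling_point_elevation(18.0, 2.0), 2))  # ~0.16 degC
```

The punchline of the arithmetic: the elevation for kitchen quantities of salt is a fraction of a degree, so the "cooks faster" effect is tiny even though it is real and calculable.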
Does recourse to a molecular interpretation of observed macroscopic phenomena constitute explanation? In a similar manner, does invocation of large-scale activity to account for immediately perceptible phenomena do so?
It seems we have in effect ‘discretized’ our analysis of phenomena into various defined orders of magnitude and we refer between them in order to rationalize the phenomena. A crude division might be into:
•  The continuum or macroscopic scale, where we can safely ignore the discontinuous, molecular nature of matter without excessive loss in accuracy and treat our subjects of analysis as ‘smooth.’ Think of any topic in classical physics as an upper limit: properties such as density and temperature retain meaning, and Newton’s laws, the equations of fluid mechanics, and Maxwell’s equations of classical electricity and magnetism still hold.
•  As “meso-” means “middle,” consider the mesoscopic scale, where electromagnetic forces gain noticeable sway in comparison to inertial forces, but where we can continue to avoid atomic- and molecular-scale discussions since the objects of study consist of a few thousand atoms or molecules. Mesoscale properties dominate from angstroms (10^-10 meters) to micrometers (10^-6 meters). Between the atomic and macro- scales we can still use classical, statistical approximations. That is, we can still discuss the temperature, entropy, and density of such systems, properties which lack meaning at the atomic scale. For example, temperature refers to the average kinetic energy of the atoms. We can calculate this property directly from a branch of physics known as statistical mechanics, which aims explicitly to determine macroscopic properties such as temperature from molecular interactions by applying the methods of statistics to large collections of atoms and molecules. At the lower end of the mesoscopic spectrum we find ourselves at the nanoscale (1-100 nanometers, 10^-9 meters) and therefore more or less at the level of smaller collections of molecules, wherein we can no longer ‘average out’ the behavior of individual atoms and molecules; their activity includes fluctuations about average properties such as temperature. Electromagnetic forces tend to dominate at this scale, involving such interactions as I described above in sodium chloride’s molecular interaction with water. These forces, in which bonding does not result from the sharing of electrons, are collectively known as noncovalent bonds.
Let’s consider the folding of a protein into its most stable conformation, determined largely by the protein’s amino acid side chains, which can be polar or nonpolar (hydrophilic or hydrophobic). Prior to folding, a protein is a long polypeptide chain. Upon complete folding in an aqueous environment, the interior ‘core’ of the protein consists of the nonpolar side chains while the polar side chains reside on the outside of the molecule, where they form noncovalent hydrogen bonds to water. (See the image at the left, where the darker beads are hydrophobic and the lighter beads are hydrophilic; the top diagram is the polypeptide chain and the lower is the folded protein.) In a hydrogen bond, an electronegative atom of one molecule attracts an electropositive hydrogen atom of a neighboring molecule (the hydrogen is more or less a proton when in a molecule other than H2, since it donates its single electron toward the covalent bond, and in fact becomes a proton, H+, when an acid dissociates in water); in this example the interaction is between a polar amino acid side chain and the hydrogen atoms in a water molecule.
We don’t need to consider the activity of individual atoms and their electrons in order to grasp what’s going on here, although the implications of their energetic contributions tend to dominate.
•  At the atomic scale and below we use a different set of rules, quantum mechanics, to describe what’s happening.
•  At the subatomic scale we run into the nucleus and its constituents, the elementary particles.
(Due to my own lack of familiarity with the above two perspectives I don’t feel comfortable delving into their impact on macroscopic phenomena, though we may construe molecular orbital and valence bond theoretical descriptions of molecular electron distributions and activity and hence most of what chemistry studies as complex interactions ‘falling out’ of quantum mechanics, as many physicists are apt to do.)
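The statistical-mechanics claim in the mesoscale bullet above, that temperature is recovered from the average kinetic energy of the atoms, can be sketched for the simplest case. For a monatomic ideal gas the standard relation is T = 2⟨KE⟩ / (3 kB); the "ensemble" below is a toy assumption of mine, not anything from the post, jittered to mimic the fluctuations about the average that the bullet mentions:

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_kinetic_energy(kinetic_energies_joules):
    """Kinetic temperature of a monatomic ideal gas: T = 2 * <KE> / (3 * kB)."""
    mean_ke = sum(kinetic_energies_joules) / len(kinetic_energies_joules)
    return 2.0 * mean_ke / (3.0 * K_B)

# Toy ensemble: each atom carries roughly the mean kinetic energy of a 300 K gas,
# with +/-10% fluctuations that 'average out' over many atoms.
random.seed(0)
target_ke = 1.5 * K_B * 300.0
sample = [target_ke * random.uniform(0.9, 1.1) for _ in range(10_000)]
print(round(temperature_from_kinetic_energy(sample)))  # close to 300 K
```

The point of the sketch is the direction of inference: a macroscopic property (temperature) falls out of statistics over molecular quantities, and it simply has no meaning for a single atom.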
I consider boundaries between topics of analysis arbitrary, whether it involves defining fields of scientific study or defining length and time scales within which to examine phenomena. Order observed in any system is imposed on our perception of the system and is not intrinsic to it. You and I have discussed before the purpose and origin of pattern-recognition as an indication of the limitations of human cognition, insofar as we can process a finite amount of information per unit time (~140 bits/sec). Likewise, defining separate fields of study organizes a great deal of information under the umbrellas of fundamental principles, thus reducing an infinite set of volumes of observations of the world to a few cue cards ‘governing’ its actions and interactions.
Writ on these cue cards are our scientific laws, to which recourse is made when explaining observed phenomena. In effect, turning to such laws tells us that “X exhibits such properties / does such and such because that’s what everything within its category does.” Scientific laws DESCRIBE classes of behaviors. Newton’s laws of motion DESCRIBE the motions of a macroscopic body. Quantum mechanics allows us to DESCRIBE the activities of an atom. We cannot say that a phenomenon occurs because of ‘such and such scientific law’ so much as ‘the phenomenon exhibits such and such tendency and we have yet to observe a case of the phenomenon which does not.’
An explanation for our very first example might state that water’s boiling point elevation in the presence of solute happens because of intermolecular attractions between the dissolved particles and water molecules. To do so implies a cause and an effect. However, aren’t the cause and effect one and the same? We are merely considering the phenomenon at two different length scales. We cannot separate cause from effect. Hence we must adopt a meta-perspective on the nature of explanation. Take a look at the diagrams below:

(Diagrams not available at this time)

In making molecular interpretations such as our first example we are wont to say, “this is what’s really happening.” But in actuality, it is what is happening, regardless of scale— it’s all part of reality, but we can’t observe it at all scales with our naked eyes. I believe that we should consider our descriptions of the world in accord with what I have outlined in diagram 2: as an explanatory continuum. The prospect of implementing such a philosophy withers in light of a general meta-law of physics, that a physical description at one scale need not consider the details of what happens at much smaller scales (B. Lautrup, The Physics of Continuous Matter).
Doing so necessitates introducing approximations into our calculations of physical properties. For example at the continuum scale we can ignore the random motions of the constitutive molecules leading to fluctuations in such properties as temperature and density, bumping them a little bit up or a little bit down. We must in turn introduce a suitable threshold for error in such calculations.
When describing a molecule in quantum mechanics we must solve the Schrödinger equation while accounting for the kinetic energy of the nuclei, the kinetic energy of the electrons, the contributions to the molecule’s potential energy arising from the attraction between the nuclei and the electrons, and the repulsive energies between electrons and between nuclei. Through the Born-Oppenheimer approximation we can treat the atomic nuclei as immobile in space relative to the motion of the electrons, the nuclei being roughly 2000 times more massive than the electrons. Though considered a “very good approximation,” error exists on the order of about 0.001 (Physical Chemistry: A Molecular Approach, D. McQuarrie, J. Simon). That’s apparently negligible, although corrections exist.
The magnetic moment of an electron is a measure of the direction and magnitude of its motion when subject to a magnetic field. Paul Dirac’s theory predicted that the electron’s magnetic moment, in certain units, had a measure of exactly 1. Experiments demonstrated that the moment lay more in the vicinity of 1.00118 ± 0.00003. The prevailing theory dealing with this question (quantum electrodynamics) ignored the interactions of the electrons with light. Attempts to correct the theory led to predicted values of infinity. Refinements made to both the theory and experiment put the number at 1.00115965246 ± 0.000000000020 and 1.00115965221 ± 0.000000000004, respectively (QED: The Strange Theory of Light and Matter, R. P. Feynman). This theoretical error threshold is apparently admissible as well.
What does such a wide range of acceptable error tolerances tell us? For one thing, it seems as though the details of our knowledge hinge on these thresholds. Consider Einstein’s modification of mass via the special theory of relativity: m becomes m0/√(1 − v^2/c^2), where m0 represents the mass of the body at rest, v stands for the velocity of the body in motion, and c represents the speed of light. Any moving body therefore increases in mass by an infinitesimal (yet tangible and calculable!) quantity, whether it be a top spinning on a table or a car ‘speeding’ along at 80 mph. Within the scope of error tolerances such as those described above, though the principles of special relativity supposedly hold for all real objects, the changes in mass are negligible at everyday speeds. However, we have seen thus far that such negligible quantities hint at unaccounted-for behaviors and interactions across multiple length and time scales.
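The "tangible and calculable" mass increase for the 80 mph car can actually be computed; a minimal sketch of the m = m0/√(1 − v²/c²) formula (the 1500 kg figure for the car is my own illustrative assumption):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def relativistic_mass(rest_mass_kg: float, v_m_per_s: float) -> float:
    """Relativistic mass per special relativity: m = m0 / sqrt(1 - v^2/c^2)."""
    return rest_mass_kg / math.sqrt(1.0 - (v_m_per_s / C) ** 2)

# An assumed ~1500 kg car 'speeding' along at 80 mph:
v = 80 * 0.44704  # mph -> m/s (about 35.8 m/s)
increase = relativistic_mass(1500.0, v) - 1500.0
print(increase)   # on the order of 1e-11 kg
```

Tens of picograms on a ton and a half of car: exactly the kind of quantity that sits far below any everyday error tolerance, yet is still predicted by the theory.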
Such successive approximations would suggest the propagation of error in the prediction of macroscopic properties when starting from a subatomic scale and working one’s way up. This remains an open question for me. I’ll let you know when I find out the efficacy of the aforementioned corrections to the approximations.
Can we then, in any circumstance, answer the question, “Why?” Does one phenomenon ‘cause’ another the way pushing one domino over will knock over the next and subsequently all the dominoes in an arranged circuit, analogous to claiming that activity at one spatial or time scale explains observations at another? Or are cause and effect united in a constant ‘flow’ independent of our attempts to discretize and impose order upon them? On a cosmic spatiotemporal scale, this would suggest that every event in universal history ‘flowed naturally’ from the initial state of the universe, if you catch my drift. Please let me know if you would like me to clarify any of the above points. I’ve encountered difficulty in communicating them to others. My analysis is far from conclusive; I plan to continue to think, research, and write on the topics discussed.

Sunday, February 28, 2010

What to do?

Our individual consciousnesses permit each of us to experience only one interpretation of the world. Do we have any way of knowing whether an objective reality exists independently of our senses, given that our consciousness limits us to only experiencing our own experiences?

How do we organize our phenomenal experiences? We attempt to incorporate them into existing patterns of thought, 'filing' them away in the appropriate folder in the appropriate cabinet. When then do we add new folders, moving to new paradigms for organizing our thoughts, frameworks for understanding the world? Would such a move require a catastrophe, in the sense of dynamical systems (a drastic shift in the behavior of a system resulting from changes in parameters, for example how raising water's temperature by 1 degree C from 99 degrees C at atmospheric pressure results in its transition from a liquid to a vapor)?

Assuming an objective reality exists, making sense of it would seem to reduce to constructing metaphors for what we observe in terms of what we already know, until the occurrence of such a dramatic event as described above. How do we add new terms to our 'vocabulary' for constructing such metaphors? New phenomenal experiences.