Science

Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations of causation in the universe. Science is abductive_.

Science is organized skepticism

Since classical antiquity, science as a type of knowledge has been closely linked to philosophy. In the early modern period, the words "science" and "philosophy of nature" were used interchangeably. By the 17th century, natural philosophy_ was considered a separate branch of philosophy.

1   Etymology

"Science" comes from the Latin "scientia", meaning "knowledge", and it originally covered all knowledge, including both a priori and a posteriori.

Traditionally, scientific and artistic disciplines had the suffix "-ics". For example, academics_, acrobatics_, acoustics, aerobics_, aerodynamics_, aeronautics_, aesthetics, arithmetic_, asceticism, ballistics, ("of life") biotic, ceramics, ("of alchemy") chemical, cleric_, civics_, classical mechanics, cybernetics_, cyclic, diagnostic, ebonics_, economics, electronics_, empiricism, epic_, ergonomics, (eu "good" + genos "birth") eugenics_, ethics, genetics_, ("of or pertaining to drawing") graphics_, gymnastics, haptics_, heuristics, hieroglyphics_, hydraulics_, logistics_, linguistics, logic, ("for the lyre") lyrics_, mathematics, medic_, metaphysics, mnemonic, music, mystic, neurotic, nootropics, optics_, phonetics, photogenic, physics, plastic, (polis "city") politics, pragmatics, proxemics, ("pertaining to the people") republic_, rhetoric, robotics, romanticism_, schematics, semantics, semiotics, skepticism_, statistics, stoicism, (taxis "arrangement") tactics, traffic_.

Like the suffix "-ic", "-ics" is from Greek -ikos "in the manner of; pertaining to" and seems to be a way to make such adjectives into nouns meaning the study of a subject, though not all such adjectives become such words. Examples of "-ic" adjectives that do not form "-ics" nouns include: "acrylic", "agnostic", "alcoholic", "anorexic", "aquatic", "atomic", "cubic", "cyclic", "diabetic", "erotic", "Gothic", "italic", "majestic", "nostalgic", "numeric", "photogenic", "romantic", "systemic".

arctic arsenic classic clinic critic epidemic elastic ethnic garlic? generic skeptic spherical specific sonic sympathetic heretic kinetic lunatic polemic traffic tragic mosaic? public? picnic? scenic? static? dynamic? tonic toxic tunic relic rhythmic rubric rustic scientific

automatic, charismatic, cryogenic, eccentric, fantastic, honorific, lymphatic, probiotic, topic, thematic, tectonics, volcanic, polemics, prosthetics, psychic, psychotic, septic, scholastic, anesthetics, futuristic, histrionics, hydroponic, workaholic

1.1   "Scientist"

The English academic William Whewell first put the word "scientist" into print in 1834 in a review of Mary Somerville's "On the Connexion of the Physical Sciences". Whewell's review argued that science was becoming fragmented, that chemists and mathematicians and physicists had less and less to do with one another. "A curious illustration of this result," he wrote, "may be observed in the want of any name by which we can designate the students of the knowledge of the material world collectively." He then proposed "scientist," an analogue to "artist," as the term that could provide linguistic unity to those studying the various branches of the sciences. [5]

Most nineteenth-century scientific researchers in Great Britain, however, preferred another term: "man of science." The analogue for this term was not "artist," but "man of letters"—a figure who attracted great intellectual respect in nineteenth-century Britain. [5]

“Scientist” met with a friendlier reception across the Atlantic. By the 1870s, “scientist” had replaced “man of science” in the United States. Interestingly, the term was embraced partly in order to distinguish the American “scientist,” a figure devoted to “pure” research, from the “professional,” who used scientific knowledge to pursue commercial gains. [5]

“Scientist” became so popular in America, in fact, that many British observers began to assume that it had originated there. When Alfred Russel Wallace responded to Carrington’s 1894 survey he described “scientist” as a “very useful American term.” For most British readers, however, the popularity of the word in America was, if anything, evidence that the term was illegitimate and barbarous. [5]

Feelings against “scientist” in Britain endured well into the twentieth century. In 1924, “scientist” once again became the topic of discussion in a periodical, this time in the influential specialist weekly Nature. In November, the physicist Norman Campbell sent a Letter to the Editor of Nature asking him to reconsider the journal’s policy of avoiding “scientist.” He admitted that the word had once been problematic; it had been coined at a time “when scientists were in some trouble about their style” and “were accused, with some truth, of being slovenly.” Campbell argued, however, that such questions of “style” were no longer a concern—the scientist had now secured social respect. Furthermore, said Campbell, the alternatives were old-fashioned; indeed, “man of science” was outright offensive to the increasing number of women in science.

In response, Nature’s editor, Sir Richard Gregory, decided to follow in Carrington’s footsteps. He solicited opinions from linguists and scientific researchers about whether Nature should use “scientist.” The word received more support in 1924 than it had thirty years earlier. Many researchers wrote in to say that “scientist” was a normal and useful word that was now ensconced in the English lexicon, and that Nature should use it.

However, many researchers still rejected “scientist.” Sir D’Arcy Wentworth Thompson, a zoologist, argued that “scientist” was a tainted term used “by people who have no great respect either for science or the ‘scientist.’” The eminent naturalist E. Ray Lankester protested that any “Barney Bunkum” might be able to lay claim to such a vague title. “I think we must be content to be anatomists, zoologists, geologists, electricians, engineers, mathematicians, naturalists,” he argued. “‘Scientist’ has acquired—perhaps unjustly—the significance of a charlatan’s device.”

In the end, Gregory decided that Nature would not forbid authors from using “scientist,” but that the journal’s staff would continue to avoid the word. Gregory argued that “scientist” was “too comprehensive in its meaning … The fact is that, in these days of specialized scientific investigation, no one presumes to be ‘a cultivator of science in general.’” And Nature was far from alone in its stance: as Gregory observed, the Royal Society of London, the British Association for the Advancement of Science, the Royal Institution, and the Cambridge University Press all rejected “scientist” as of 1924. It was not until after the Second World War that Campbell would truly get his wish for “scientist” to become the accepted British term for a person who pursued scientific research. [5]

2   Philosophy

The scientific method rests on a set of basic assumptions:

1. An objective reality shared by all rational observers exists
2. This reality is governed by natural laws
3. These laws can be discovered by means of systematic observation and experimentation

There are different schools of thought in philosophy of science. A popular position is empiricism.

Science is dependent upon and embedded in philosophy. Any scientific understanding presupposes opinions about the way things are. Those fundamental opinions, which must be the foundations of any science, are the direct topics of reflection in philosophic thinking. [4]

Science does not yet understand why humans cry, why we have different types of tears, or why we find things humorous.

There is no cure for tinnitus.

For many medications, we do not know how they work.

3   Method of inquiry

The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, and correcting and integrating previous knowledge.

It is commonly presented as six steps (the exact count varies), which correspond to the sections of a research paper: purpose, research, hypothesis, experiment, analysis, conclusion.

4   Classification

4.1   Prescience

Philosophy is like science in that both rely on observation. The problem with philosophy is not that it is any less rigorous, but that it has no control over its observations. Its weakness, therefore, is that there may be confounding variables, which is why it is subject to so much criticism.

Prior to the 1500s and the work of Francis Bacon, Rene Descartes_ and Isaac Newton_, people did not view the world through a scientific lens. Instead, people were more concerned with the actions that dictated survival than with approximating objective truth. [6]

Historically, rather than asking "what" caused some phenomenon, people used to ask "who".

4.2   Pure and applied

Pure science focuses on knowledge for its own sake. Applied science focuses on knowledge of demonstrable utility.

4.3   Qualitative and quantitative

Exact science deals with predictions and their verification by observation, measurement, and experiment.

5   History

Kuhn argued for an episodic model in which periods of conceptual continuity in normal science are interrupted by periods of revolutionary science. [2]

5.1   Periods

5.1.2   Islamic Golden Age

Public hospitals established during this time (called Bimaristan hospitals) are considered "the first hospitals" in the modern sense of the word, and they issued the first medical diplomas to license doctors. The government paid scientists salaries equivalent to those of professional athletes today.

5.1.3   The Copernican revolution (1543)

The Copernican Revolution was the paradigm shift from the Ptolemaic model of the heavens, which described the cosmos as having Earth stationary at the center of the universe, to the heliocentric model with the Sun at the center of the Solar System.

The Copernican Revolution started with the publishing of the book De revolutionibus orbium coelestium by Nicolaus Copernicus. His book proposed a heliocentric system versus the widely accepted geocentric system of that time.

5.1.4   Newton, Principia (1687)

Sir Isaac Newton's Principia concluded the Copernican Revolution. The development of his laws of motion and universal gravitation explained the presumed motion of the heavens by asserting a gravitational force of attraction between two objects.

5.1.5   Darwin

Charles Darwin published On the Origin of Species in November 1859. There was huge initial resistance, but eventually much of the world was convinced by his arguments.

Darwin was probably influenced by Malthus, whom he had been reading recreationally. Malthus proposed that populations outpace their food supply, which Darwin realized would cause natural selection.

5.1.6   The scientific revolution

The scientific revolution was the emergence of modern science during the early modern period, when developments in mathematics, physics, astronomy, biology (including human anatomy) and chemistry transformed views of society and nature.

5.1.7   Einstein

Einstein wrote four of his most important papers in 1905: one on relativity, another on the photoelectric effect, another on Brownian motion, and another on measuring the size of atoms. The paper on the photoelectric effect won him the Nobel Prize in 1921.

5.1.8   Fermi

Enrico Fermi was born on September 29, 1901. He completed his PhD at the age of 21 at Pisa.

Physicists call exercises in estimation and basic physics fluency "Fermi problems".

He also asked "where are all the aliens" which is known as the Fermi paradox.

Fermi was the first person to initiate a controlled and self-sustaining fission reaction. Chicago Pile-1 went critical on December 2, 1942, under the stands of the University of Chicago's Stagg Field.

7   References

 [0] Auguste Comte. 1832-1842. The Course in Positive Philosophy.
 [1] Karl Popper. 1934. The Logic of Scientific Discovery.
 [2] Thomas Kuhn. 1962. The Structure of Scientific Revolutions.
 [4] Joe Sachs. 1995. Aristotle's Physics.
 [5] Thony Christie. July 10, 2014. The history of "scientist". https://thonyc.wordpress.com/2014/07/10/the-history-of-scientist/
 [6] Jordan Peterson. 2018. 12 Rules for Life. Rule 2: Treat Yourself Like Someone You Are Responsible for Helping.

 [11] John R. Pierce. 1980. An Introduction to Information Theory. Chapter 1.


Newton's laws solved the problem of motion as Newton defined it, not of motion in all the senses in which the Greeks used the word.

Our speech is adapted to our daily needs, or perhaps to the needs of our ancestors. Except in the study of language itself, science does not seek understanding by studying words and their relations. Rather, science looks for things in nature, including our human nature and activities which can be grouped together and understood.

A theory relates and understands a portion of our experience. Laws themselves are not the whole of the theory; they are merely the basis of it. The theory embraces both the assumptions themselves and the mathematical working out of the logical consequences which must necessarily follow from the assumptions.

The ideas and assumptions of a theory determine the generality of the theory, that is, to how wide a range of phenomena the theory applies.

Maxwell's equations explain all (non-quantum_) electrical phenomena. In 1873, in his treatise "Electricity and Magnetism", James Clerk Maxwell presented and fully explained for the first time the natural laws relating electric and magnetic fields and electric currents. He showed that there should be electromagnetic waves which travel with the speed of light. Hertz later demonstrated these experimentally, and we now know that light consists of electromagnetic waves.

A branch of electrical theory called "network theory" deals with the electrical properties of electrical circuits, or networks, made by interconnecting three sorts of idealized electrical structures: resistors (devices such as coils of thin, poorly conducting wire or films of metal or carbon, which impede the flow of current), inductors (coils of copper wire, sometimes wound on magnetic cores), and capacitors (thin sheets of metal separated by an insulator or dielectric such as mica or plastic; the Leyden jar was an early form of capacitor).

In one sense, network theory is less than Maxwell's equations. In another sense, however, it is more general, for all the mathematical results of network theory hold for vibrating mechanical systems made up of idealized mechanical components as well as for the behavior of interconnections of idealized electrical components. In mechanical applications, a spring corresponds to a capacitor, a mass to an inductor, and a dashpot or damper (such as that used in a door closer to keep the door from slamming) corresponds to a resistor. In fact, network theory might have been developed to explain the behavior of mechanical systems, and it is so used in the field of acoustics.

Some theories are very strongly physical theories. Newton's law and Maxwell's equations are such theories. Network theory is essentially a mathematical theory. The terms used in it can be given various physical meanings.

How can we describe or classify theories? We can say that a theory is very narrow or very general in its scope. We can also distinguish theories as to whether they are strongly physical or strongly mathematical.

In these terms, communication theory is both strongly mathematical and quite general.

 [12] John R. Pierce. 1980. An Introduction to Information Theory. Chapter 2.

We can learn at least two things from the history of science.

1. The most general and powerful discoveries have arisen, not through the study of phenomena as they occur in nature, but rather through the study of phenomena in man-made devices. This is because the phenomena in man's machines are simplified and ordered in comparison with those occurring naturally.

Thus, the existence of the steam engine gave tremendous impetus to the science of thermodynamics. We see this in the work of Carnot, who first proposed an ideal expansion of gas which extracts the maximum possible mechanical energy from the thermal energy of steam.

Our knowledge of aerodynamics and hydrodynamics exists chiefly because of airplanes and ships, not because of birds and fishes. Our knowledge of electricity came not from the study of lightning but from the study of man's artifacts.

2. We tend to value understanding according to the difficulty with which it was won.

One might expect of Maxwell's treatise on electricity and magnetism a bold and simple pronouncement concerning the step he had taken. Instead, it is cluttered with all sorts of lesser matters that once seemed important.

The origins of an idea can help to show what its real content is. But to attain such understanding, we must trace the actual course of discovery, not some course which we feel discovery should have or could have taken.

Regarding information theory and point (2):

• Information theory, thermodynamics, and statistical mechanics all use a quantity called "entropy".

Thermodynamics and statistical mechanics are older than communication theory.

We might conclude that communication theory somehow grew out of statistical mechanics. Actually, communication theory evolved from an effort to solve certain problems in the field of electrical communication.

In communication theory, we consider a message source, such as a writer or speaker, which may produce on a given occasion any one of many possible messages. The amount of information conveyed by the message increases as the amount of uncertainty as to what message actually will be produced becomes greater. A message which is one out of ten possible messages conveys a smaller amount of information than a message which is one out of a million. The entropy of communication theory is this uncertainty. The more we know about what message the source will produce, the less uncertainty, the less the entropy, and the less the information.
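
As a toy sketch of this relationship (my own illustration, assuming all messages are equally likely and measuring information as the base-2 logarithm of the number of possibilities):

```python
import math

def information_bits(n_messages: int) -> float:
    """Information (in bits) conveyed by one message chosen
    from n_messages equally likely possibilities."""
    return math.log2(n_messages)

# One message out of ten carries far less information
# than one message out of a million.
print(information_bits(10))         # ~3.32 bits
print(information_bits(1_000_000))  # ~19.93 bits
```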

The ideas that give rise to the entropy of physics and the entropy of communication theory are quite different.

Communication theory has its origins in the study of electrical communication.

During a transatlantic voyage in 1832, Samuel F. B. Morse set to work on the first widely successful form of electrical telegraph. As Morse first worked it out, his telegraph was much more complicated than the one we know. It actually drew short and long lines on a strip of paper, and sequences of these represented not letters of a word, but numbers assigned to words in a dictionary or code book which Morse completed in 1837. This is an efficient form of coding, but it is clumsy.

While Morse was working with Alfred Vail, the old coding was given up, and what we now know as the Morse code had been devised by 1838. In this code, letters of the alphabet are represented by spaces, dots, and dashes. The space is the absence of an electric current, the dot is an electric current of short duration, and the dash is an electric current of longer duration.

Various combinations of dots and dashes were assigned to letters of the alphabet. E, the letter occurring most frequently in English text, was represented by the shortest possible code symbol, a single dot, and in general, short combinations of dots and dashes were used for frequently used letters and long combinations for rarely used letters.

The choice was guided not by tables of the relative frequencies of various letters in English text, nor were letters in text counted to get such data. Relative frequencies of occurrence were estimated by counting the number of types in the various compartments of a printer's type box.

Our modern theory tells us that we could gain only about 15 percent in speed if we used another assignment of dots, dashes, and spaces to letters.
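
A toy illustration of Morse and Vail's principle (the letter frequencies and code lengths below are made up for illustration, not Morse's actual values): giving the short codes to the frequent letters lowers the average time per letter.

```python
# Hypothetical relative frequencies for a four-letter alphabet.
freqs = {"e": 0.40, "t": 0.30, "q": 0.20, "z": 0.10}

# Short codes for frequent letters (lengths in arbitrary time units)...
good = {"e": 1, "t": 2, "q": 3, "z": 4}
# ...versus the same set of lengths assigned without regard to frequency.
bad = {"e": 4, "t": 3, "q": 2, "z": 1}

def avg_length(lengths):
    """Expected code length per letter under the frequencies above."""
    return sum(freqs[c] * lengths[c] for c in freqs)

assert avg_length(good) < avg_length(bad)   # frequency-aware wins
```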

In 1843, Congress passed a bill appropriating money for the construction of a telegraph circuit between Washington and Baltimore. Morse started to lay the wire underground, but ran into difficulties which later plagued submarine cables even more severely. He solved his immediate problem by stringing the wire on poles.

Different circuits which conduct a steady electric current equally well are not equally suited to electrical communication. If one sends dots and dashes too fast over an underground or undersea circuit, they run together at the receiving end and may be too difficult to interpret.

Of course, if we make our dots, spaces, and dashes long enough, the current at the far end will follow the current at the sending end better, but this slows the rate of transmission. It is clear that there is somehow associated with a given transmission circuit a limiting speed of transmission. Submarine cables are slow; wires on poles are fast.

Various things can be done to increase the number of letters which can be sent over a given circuit in a period of time. A dash takes three times as long to send as a dot, so one could gain by using only short pulses and distinguishing them in some other way, for example by means of double-current telegraphy. Imagine that at the receiving end a galvanometer_, a device which detects and indicates the direction of flow of small currents, is connected between the telegraph wire and the ground. To indicate a dot, the sender connects the positive terminal of his battery to the wire and the negative terminal to the ground, and the needle of the galvanometer moves to the right; vice versa, and the needle moves to the left.

In 1874, Thomas Edison went further; in his quadruplex telegraph system, he used two intensities of current as well as two directions of current. He used changes in intensity, regardless of changes in direction of current, to send one message, and changes of direction, regardless of changes in intensity, to send another message.

Clearly, how much information is possible to send over a circuit depends not only on how fast one can send successive symbols over the circuit, but also on how many different symbols one has available to choose among.

Mathematics was early applied to such problems. In 1855, William Thomson, later Lord Kelvin, calculated precisely what the received current will be when a dot or space is transmitted over a submarine cable. A more powerful attack on such problems followed the invention of the telephone by Alexander Graham Bell in 1875. Telephony makes use of currents whose strength varies smoothly and subtly over a wide range of amplitudes, with a rapidity several hundred times as great as that encountered in manual telegraphy.

Many men helped to establish an adequate mathematical treatment of the phenomena of telephony: Henri Poincare, Oliver Heaviside, Michael Pupin, and G. A. Campbell. The mathematical methods these men used were an extension of the work done by Joseph Fourier in the early nineteenth century in connection with the flow of heat. Fourier based his mathematical attack on problems of heat flow on a particular mathematical function called a sine wave.

A sine wave can be described completely by three quantities:

Amplitude
The maximum height above zero
Phase
The time at which the maximum is reached
Period
The time T between maxima. Its reciprocal is called frequency ($$f$$). If the period $$T$$ of a sine wave is 1/100 second, the frequency $$f$$ is 100 cycles per second ("cps").

A cycle is a complete variation from crest, through trough, and back to crest again.

The sine wave is periodic in that one variation from crest through trough to crest again is just like any other.
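
A minimal sketch of such a wave in Python (my own illustration), taking "phase" in the sense defined above, as the time at which the maximum is reached:

```python
import math

def sine_wave(t, amplitude, period, phase):
    """Value at time t of a sinusoid described by its amplitude,
    period T, and phase (here, the time of a maximum); a cosine
    centered on the maximum expresses exactly this parameterization."""
    return amplitude * math.cos(2 * math.pi * (t - phase) / period)

A, T, phase = 2.0, 0.01, 0.0025   # period 1/100 s -> frequency 100 cps
# The full amplitude is reached at t = phase...
assert abs(sine_wave(phase, A, T, phase) - A) < 1e-9
# ...and again exactly one period T later, since the wave is periodic.
assert abs(sine_wave(phase + T, A, T, phase) - A) < 1e-9
```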

Fourier proved that any variation of a quantity with time can be accurately represented as the sum of a number of sinusoidal variations of different amplitudes, phases, and frequencies. The quantity concerned might be the displacement of a vibrating string, the height of the surface of a rough ocean, the temperature of an electric iron, or the current or voltage in a telephone wire.

This is useful because of two physical facts:

1. The circuits used in the transmission of electrical signals do not change with time, and they behave in what is called a linear fashion.

"Linearity means simply that if we know the output signals corresponding to any number of input signals sent separately, we can calculate the output signal when several of the input signals are sent together merely by adding the output signals corresponding to the input signals. In a linear electrical circuit or transmission system, signals act as if they were present independently of one another; they do not interact. This is, indeed, the very criterion for a circuit being called a linear circuit."

Usually electrical circuits are linear, except when they include vacuum tubes, transistors, or diodes.

"Because telegraph wires are linear, which is just to say because telegraph wires are such that electrical signals on them behave independently without interacting with one another, two telegraph signals can travel in opposite directions on the same wire at the same time without interfering with one another. However, while linearity is a fairly common phenomenon in electrical circuits, it is by no means a universal natural phenomenon. Two trains can't travel in opposite directions on the same track without interference."
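
The superposition property described above can be sketched with a toy linear system (the coefficients here are arbitrary; any scaled, delayed sum of the input is linear):

```python
def linear_circuit(x):
    """A toy linear, time-invariant system: the output is a scaled
    copy of the input plus a scaled, one-step-delayed copy."""
    return [0.5 * x[n] + 0.25 * (x[n - 1] if n > 0 else 0.0)
            for n in range(len(x))]

x1 = [1.0, 0.0, 2.0, -1.0]
x2 = [0.5, 3.0, 0.0, 1.0]
both = [a + b for a, b in zip(x1, x2)]

# Superposition: the response to the sum of two signals equals
# the sum of the individual responses; the signals do not interact.
lhs = linear_circuit(both)
rhs = [a + b for a, b in zip(linear_circuit(x1), linear_circuit(x2))]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```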

It can be shown mathematically that if we use a sinusoidal signal as an input signal to a linear transmission path, we always get out a sine wave of the same frequency. The amplitude of the wave may be less than that of the input sine wave; we call this attenuation of the sinusoidal signal. The output sine wave may rise to a peak later than the input sine wave; we call this phase shift or delay of the sinusoidal signal.

The amounts of attenuation and delay depend on the frequency of the sine wave.

Thus, corresponding to an input signal made up of several sinusoidal components, there will be an output signal having components of the same frequencies but of different relative phases or delays and of different amplitudes. Thus, in general, the shape of the output signal will be different from the shape of the input signal. However, the differences can be thought of as caused by changes in the relative delays and amplitudes of the various components, differences associated with their different frequencies. If the attenuation and delay of a circuit are the same for all frequencies, the shape of the output wave will be the same as that of the input wave; such a circuit is distortionless.

The Fourier analysis of signals into components of various frequencies makes it possible to study the transmission properties of a linear circuit for all signals in terms of the attenuation and delay it imposes on sine waves of various frequencies as they pass through it.
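
A small numerical check of the sine-in, sine-out claim, using an illustrative two-tap linear circuit of my own choosing: the output is a sine of the same frequency, attenuated by the magnitude of the circuit's frequency response and delayed by its phase.

```python
import math, cmath

fs = 1000.0          # sample rate (samples per second)
f = 50.0             # input sine wave frequency (cps)
a, b = 0.5, 0.25     # toy linear circuit: y[n] = a*x[n] + b*x[n-1]

x = [math.sin(2 * math.pi * f * n / fs) for n in range(200)]
y = [a * x[n] + b * x[n - 1] for n in range(1, len(x))]

# Frequency response of this linear circuit evaluated at frequency f.
H = a + b * cmath.exp(-2j * math.pi * f / fs)
gain, shift = abs(H), cmath.phase(H)

# The output is a sine of the SAME frequency, merely attenuated
# (by |H|) and phase-shifted (by the phase of H).
predicted = [gain * math.sin(2 * math.pi * f * n / fs + shift)
             for n in range(1, len(x))]
assert all(abs(p - q) < 1e-9 for p, q in zip(y, predicted))
```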

In 1917, Harry Nyquist came to the American Telephone and Telegraph Company immediately after receiving his Ph.D. at Yale. In 1924 he published his results in an important paper, "Certain Factors Affecting Telegraph Speed." Among other things, it clarified the relation between the speed of telegraphy and the number of current values. Nyquist showed that if we send symbols at a constant rate, the speed of transmission, $$W$$, is related to $$m$$, the number of different symbols, by:

\begin{equation*} W = K \log m \end{equation*}

Here $$K$$ is a constant whose value depends on how many successive current values are sent each second.

If we can specify $$M$$ independent 0-or-1 combinations at once, we can in effect send $$M$$ independent messages at once, so speed should be proportional to $$M$$. But in sending $$M$$ messages at once we have $$2^M$$ possible combinations of the $$M$$ independent 0-or-1 choices.

Thus the logarithm of the number of symbols is just the number of independent 0 or 1 choices that can be represented simultaneously; the number of independent messages that can be sent at once.
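
A sketch of Nyquist's relation, taking the logarithm to base 2 so that it counts 0-or-1 choices (any other base only changes the constant $$K$$):

```python
import math

def nyquist_speed(m, K=1.0):
    """Nyquist's relation W = K log m: transmission speed grows
    only logarithmically with the number of distinct symbols m."""
    return K * math.log2(m)

# The logarithm counts independent 0-or-1 choices: an alphabet of
# m = 2**M symbols carries exactly M binary choices per symbol.
for M in range(1, 6):
    assert abs(nyquist_speed(2 ** M) - M) < 1e-12

# Going from 2 to 4 current values adds just one binary choice.
print(nyquist_speed(2), nyquist_speed(4))  # 1.0 2.0
```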

Nyquist clearly realized that fluctuations in the attenuation of the circuit, interference or noise, and limitations on the power which can be used make the use of many current values difficult.

Nyquist defined the line speed as one half the number of signal elements (dots, spaces, current values) which can be transmitted in a second.

By the time that Nyquist wrote, it was common practice to send telegraph and telephone signals on the same wires. Telephony makes use of frequencies above 150 cps, while telegraphy can be carried out by means of lower-frequency signals.

Nyquist showed how telegraph signals could be shaped so as to have no sinusoidal components of high enough frequency to be heard as interference by telephones connected to the same line. He noted that the line speed was proportional to the width of the band (in the sense of a strip) of frequencies used in telegraphy. We now call this range of frequencies the bandwidth of a circuit or of a signal.

Finally, in analyzing one proposed sort of telegraph signal, Nyquist showed that it contained at all times a steady sinusoidal component of constant amplitude. While this component formed a part of the transmitter power used, it was useless at the receiver, for its eternal, regular fluctuations were perfectly predictable and could have been supplied at the receiver rather than transmitted over the circuit. Nyquist referred to this useless component of the signal, which, he said, conveyed no intelligence, as redundant, a word which we will encounter later.

Nyquist continued to study the problems of telegraphy, and in 1928 he published a second important paper, "Certain Topics in Telegraph Transmission Theory". In this he demonstrated a number of important points:

• If one sends some number 2N of different current values per second, all the sinusoidal components of the signal with frequencies greater than N are redundant, in the sense that they are not needed in deducing from the received signal the succession of current values which were sent.
• He showed how a signal could be constructed which would contain no frequencies above N cps and from which it would be easy to deduce at the receiving point what current values had been sent.
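
These points can be checked numerically with Whittaker-Shannon (sinc) interpolation; the particular signal, frequencies, and window size below are arbitrary choices for illustration:

```python
import math

N = 4.0      # all components of the test signal lie below N cps
fs = 2 * N   # Nyquist's rate: 2N current values (samples) per second

def signal(t):
    # A band-limited test signal: components at 1 cps and 3 cps, both < N.
    return (math.sin(2 * math.pi * 1.0 * t)
            + 0.5 * math.cos(2 * math.pi * 3.0 * t))

def sinc(u):
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(t, half_width=4000):
    """Deduce the signal at an arbitrary time t from samples taken
    at rate fs alone (truncated sinc interpolation)."""
    k0 = round(t * fs)
    return sum(signal(k / fs) * sinc(t * fs - k)
               for k in range(k0 - half_width, k0 + half_width + 1))

# A point halfway BETWEEN two sampling instants (2/8 s and 3/8 s)
# is recovered from the samples to good accuracy.
t = 5 / 16
assert abs(reconstruct(t) - signal(t)) < 1e-2
```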

R. V. L. Hartley, the inventor of the Hartley oscillator, was thinking philosophically about the transmission of information at about this time, and he summarized his reflections in a paper, "Transmission of Information", which he published in 1928.

Hartley had an interesting way of formulating the problem of communication. He regarded the sender of a message as equipped with a set of symbols (the letters of the alphabet, for instance) from which he mentally selects symbol after symbol, thus generating a sequence of symbols. He then defined $$H$$, the information of a message, as the logarithm of the number of possible sequences of symbols which might have been selected, and showed that $$H = n \log s$$. Here $$n$$ is the number of symbols selected, and $$s$$ is the number of different symbols in the set from which symbols are selected.

This is acceptable in the light of our present knowledge of information theory only if successive symbols are chosen independently and all are equally likely to be selected.
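
A sketch of Hartley's measure, taking the logarithm to base 2 so that $$H$$ comes out in bits (a choice of unit, not part of Hartley's formula):

```python
import math

def hartley_information(n, s):
    """Hartley's H = n log s (base 2 -> bits): valid when successive
    symbols are chosen independently and all s symbols are equally
    likely to be selected."""
    return n * math.log2(s)

# A message of 10 symbols drawn from a 26-letter alphabet:
H = hartley_information(10, 26)

# Equivalently, H is the logarithm of the number of possible
# sequences, s**n, as Hartley defined it.
assert abs(H - math.log2(26 ** 10)) < 1e-9
print(H)  # ~47.0 bits
```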

Hartley goes on to the problem of encoding the primary symbols (letters of the alphabet, for instance) in terms of secondary symbols.

Questions for me to understand:

• What is the phase of a wave?
• What is linearity? What is a linear circuit?