Let's assume there is a world outside of our minds.
We are only able to experience aspects of it through our senses, that is, we make observations.
Unfortunately, these observations are limited: they are shaped by our perception, and they remain disconnected from one another.
Wikipedia says that we only see electromagnetic radiation between 380 and 750 nm, that our hearing range lies somewhere between 12 Hz and 20 kHz, and folk wisdom has it that dogs have an up to 10,000 times better sense of smell than we do.
But the situation is even worse: as soon as we try to communicate our observations (and probably long before), we start to mingle them with our own preconceptions of the world.
The sun rises in the morning and sets in the evening.
Well, technically it doesn't, as we have known since the Copernican Revolution (and as Aristarchus already suspected in antiquity), but that's what we say and what it seems like to us.
One reaction to this mess is rationalism, which holds that our senses are not to be trusted and that reason is the primary source of knowledge, maybe the only one. This stance led to great developments in logic and mathematics: as long as we start from axioms and define our own systems, we can play the analytical game, deduce their properties and arrive at impressive results without having to deal with empirical induction and the shadows cast upon the cave wall. However, if we want to know more about the world outside, the extreme version of rationalism, which disregards all experience, doesn't seem to deliver.
Kant tried to explain away the problem with his transcendental idealism by claiming that we cannot know anything about the things of the universe themselves, but only about their appearances.
Since the appearances are determined by what we put into them (and Kant lets us put quite a lot of stuff into them, like space and time), we can reason about them a priori, that is, before or independent of experience.
This is a nice thought experiment, but the premises are hard to swallow and it's unsatisfactory to not be able to know anything about the things themselves.
After all, this is what science is about: we want to gain knowledge about the world, something that goes beyond disconnected, unreliable observations.
So today when we talk about science, we usually talk about an endeavour under the flag of some form of empiricism. We know that our senses are limited and not to be trusted, but that's what we've got and what we have to work with.
Now the scientist needs coping strategies. One idea brought forward by the positivists was verificationism: a statement should be empirically verifiable (or analytically true), otherwise it's meaningless. Unfortunately, this proposition becomes meaningless when applied to itself. It is also logically flawed: we cannot verify a universal proposition from a finite, non-exhaustive set of observations. The next swan we see might be a black one.
The prevailing method embraced by science today is, in a way, the opposite: Popper's falsificationism. The only hypotheses useful for science are those that can be falsified. You can make up any hypothesis you like, you can introduce invisible entities, come up with rules that are supposed to hold between them, complex theoretical buildings within the confines of sound reasoning, but in the end you need to make a prediction about the world that can be rejected by certain observations. No matter how many times your hypothesis has been tested and didn't turn out to be false, doubt always remains. If we follow Jaynes' logic of science, a Bayesian view, the best we can hope for is that the probability of the hypothesis being false becomes smaller and smaller. This method raises some new philosophical questions about scientific knowledge.
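This Bayesian reading can be sketched numerically: every passed test multiplies the odds in favour of a hypothesis by a likelihood ratio, so the probability of the hypothesis being false shrinks without ever reaching zero. A minimal sketch, where all priors and likelihoods are made-up numbers for illustration, not anything from Jaynes' book:

```python
# Bayesian updating after repeated successful tests of a hypothesis H.
# The numbers below are illustrative assumptions, not measurements.

def update(prior_h, p_pass_given_h, p_pass_given_not_h):
    """Posterior P(H | test passed) via Bayes' theorem."""
    evidence = p_pass_given_h * prior_h + p_pass_given_not_h * (1 - prior_h)
    return p_pass_given_h * prior_h / evidence

p_h = 0.5  # start undecided about H
for trial in range(1, 11):
    # H predicts the test outcome with near certainty; a false H
    # would still pass the test 30% of the time by chance.
    p_h = update(p_h, 0.99, 0.30)
    print(f"after test {trial:2d}: P(H) = {p_h:.6f}")

# P(H) creeps towards 1 but never reaches it: doubt always remains.
```

Note that the posterior depends on how discriminating the test is: if a false hypothesis would pass almost as often as a true one, even many successful tests move the probability very little.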
What is the ontological status of scientific theories, especially unobservable things introduced by them? Are they just means to an end (making a prediction) or should we assume that they refer to reality? What is the nature of scientific progress? Since any theory can be rejected by new evidence contradicting it, do we get closer to the truth? If scientific theories are only useful tools for predictions about the world, is truth even a meaningful category for them? Does our knowledge converge?
One answer to these questions is scientific realism. In its basic form, it is the view that scientific theories describe reality. If physicists talk about unobservable entities like fields, forces or strings, they refer to actual things in the universe. There might be occasional inaccuracies or mistakes that will eventually be corrected, but in general, accepted scientific theories are mostly true (otherwise, how would they be able to make correct predictions?). Realists claim that this view is the only explanation for the success of science other than a miracle. In this naive form, the view is easy to discredit. Looking at the history of science, there are many examples of successful theories which made correct predictions, but turned out to be false or whose entities we no longer consider to refer to anything real. For example, Maxwell still assumed the aether to be real and central to the understanding of physics when he wrote the corresponding entry for the Encyclopædia Britannica in 1878, from which I quote the following sentence:
Whatever difficulties we may have in forming a consistent idea of the constitution of the aether, there can be no doubt that the interplanetary and interstellar spaces are not empty, but are occupied by a material substance or body, which is certainly the largest, and probably the most uniform body of which we have any knowledge.
To escape this criticism, realists today mostly maintain a weaker variation called convergent realism. Individual theories and theoretical entities may be false or non-referring, but in the long run, theories converge towards the truth and their entities do refer to things in the universe. Philosophers of science still take issue with this weaker version. In the history of science we find many instances of new theories replacing older ones, fundamentally changing people's understanding and invalidating entities claimed to exist by previous theories (so-called paradigm shifts, according to Thomas Kuhn). Mathematically, too, claiming that a sequence converges when we only know its first few members is problematic. And if it does converge, how do we know it converges to the truth? On the other hand, in practice it's often convenient to adopt realist thinking. Trusting our scientific knowledge, we are able to extend our senses: we cannot immediately perceive ionizing radiation (only once we have had too much of it and suffer from acute radiation syndrome), but we know that the gas in the Geiger-Müller tube becomes conductive when exposed to ionizing radiation (electrons are freed), which is amplified by the electric field in which we placed the tube (the electrons move towards the anode and knock more electrons out of the gas atoms on the way, while the ionized atoms move the other way – Townsend avalanche). The resulting electric pulse produces the dreaded crackling noises via attached speakers. So thanks to our knowledge of physics, we are able to hear the radiation, which otherwise is hidden from our senses. There are many such examples where we speak of observations in modern empirical science and actually mean indirect observations obtained based on predictions of currently accepted theories about the world.
An even weaker position called instrumentalism follows directly from falsificationism.
Instrumentalists see theories only as models that make falsifiable predictions.
Whether the entities introduced by a theory refer to things in the universe or are just constructs to get to the right prediction doesn't matter.
All models are wrong, but some are useful, as they say.
Scientific progress then means that our predictions about things we can observe are getting better, and this must be true almost by construction, because that's how we evaluate and select our theories.
A new theory will replace an old one only if it makes better predictions.
If you like lopsided comparisons, instrumentalists are the agnostics of science.
Well, it's not as easy as that, and there are a few current debates and developments in which fundamental philosophical problems surface. One such development is a rather extreme form of (social) constructionism that is featured very prominently in the media and at universities and has also started to gain political influence in some countries. There are concepts that don't exist by force of nature but merely by common belief or social construction, for example, political units like nation states and property rights (or really any kind of rights). For example, I pay rent every month to be able to live in a flat I don't own. The guy who owns the flat gave me the keys for it and he is not physically present, but still it's considered his flat. This is not due to some inherent property of the flat, but because society agrees that he owns it. If I stopped paying rent, changed the locks and refused to move out, he'd call the police and they would throw me out because they believe that the flat is his property and that I'm violating his rights. This example seems uncontroversial, but now the question is: which concepts are constructions, and what exists independently of our silent mutual agreement or our individual view of reality? What about gender, disability or crime? The position that is popular among some people at the moment is that these things are primarily or entirely constructed concepts as well. A particularly extreme point of view is to reject that there is a world outside of our minds in the first place (which was the starting point of this text) and only keep the constructions. My (social circle's) reality is not the same as yours. If we adopt this form of relativism, we can easily dismiss any scientific argument. We have no way of falsifying claims and the scientific method breaks down.
Another interesting case comes from recent advances in machine learning. We have models that make increasingly accurate predictions. For example, we now have language models that not only recognize grammatical and acceptable sentences, but also generate them flawlessly (disregarding content, textual cohesion etc.), better than any conventional linguistic model. We have models for automatic translation, parsing, inflection and anything you can imagine. If this is the case, do we consider theoretical linguistics solved or unnecessary? Apparently it's just a bit of basic linear algebra and calculus, a huge corpus of training data, billions of parameters and raw computing power. Or do we deny these models scientific value and stick with our linguistic theories that don't generalize well and need endless exceptions to cover anything beyond a few hand-picked examples? If we reject these models, on what grounds? Because they don't reveal anything interesting about the world, or because billions of parameters make them too inelegant, complex (Ockham's razor), or unlikely (Ockham's razor via probability according to Cox's theorem)? If we are instrumentalists, it can only be one of the latter.
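The probabilistic reading of Ockham's razor can be made concrete: in a Bayesian model comparison, a model with more free parameters spreads its prior probability over more possible data sets, so it is automatically penalized unless the extra flexibility earns its keep. A toy sketch with invented numbers, comparing a parameter-free point hypothesis against a more flexible one on coin-flip data:

```python
from math import comb

# Observed data (illustrative): 52 heads in 100 flips.
n, k = 100, 52

# Model 1: fair coin, no free parameters.
# P(data | M1) is just the binomial probability at p = 0.5.
m1 = comb(n, k) * 0.5**n

# Model 2: unknown bias p with a uniform prior over [0, 1].
# Marginal likelihood = integral over p of the binomial,
# which works out to exactly 1 / (n + 1).
m2 = 1 / (n + 1)

bayes_factor = m1 / m2
print(f"P(data | fair)     = {m1:.5f}")
print(f"P(data | flexible) = {m2:.5f}")
print(f"Bayes factor (fair vs flexible) = {bayes_factor:.2f}")
# Near-balanced data favour the simpler model: the flexible model
# "wasted" prior mass on biases the data don't support.
```

The flexible model would win if the data were lopsided enough (say, 90 heads); the razor is not a blanket preference for simplicity, but a penalty that flexibility must buy back with better fit, which is one way to make the "too many parameters" objection precise.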